I0611 10:54:12.242262 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0611 10:54:12.242834 7 e2e.go:124] Starting e2e run "c1cb240e-41cb-4926-b280-95071473e345" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1591872850 - Will randomize all specs
Will run 275 of 4992 specs

Jun 11 10:54:12.309: INFO: >>> kubeConfig: /root/.kube/config
Jun 11 10:54:12.358: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 11 10:54:12.428: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 11 10:54:12.474: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 11 10:54:12.475: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jun 11 10:54:12.475: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 11 10:54:12.491: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jun 11 10:54:12.491: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 11 10:54:12.491: INFO: e2e test version: v1.18.2
Jun 11 10:54:12.492: INFO: kube-apiserver version: v1.18.2
Jun 11 10:54:12.492: INFO: >>> kubeConfig: /root/.kube/config
Jun 11 10:54:12.499: INFO: Cluster IP family: ipv4
SSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 10:54:12.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
Jun 11 10:54:12.633: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jun 11 10:54:12.649: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2fc2080e-cd68-4409-a2dd-405afff44662" in namespace "downward-api-4356" to be "Succeeded or Failed"
Jun 11 10:54:12.666: INFO: Pod "downwardapi-volume-2fc2080e-cd68-4409-a2dd-405afff44662": Phase="Pending", Reason="", readiness=false. Elapsed: 17.417655ms
Jun 11 10:54:14.671: INFO: Pod "downwardapi-volume-2fc2080e-cd68-4409-a2dd-405afff44662": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021758487s
Jun 11 10:54:16.675: INFO: Pod "downwardapi-volume-2fc2080e-cd68-4409-a2dd-405afff44662": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025480509s
STEP: Saw pod success
Jun 11 10:54:16.675: INFO: Pod "downwardapi-volume-2fc2080e-cd68-4409-a2dd-405afff44662" satisfied condition "Succeeded or Failed"
Jun 11 10:54:16.677: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-2fc2080e-cd68-4409-a2dd-405afff44662 container client-container:
STEP: delete the pod
Jun 11 10:54:16.783: INFO: Waiting for pod downwardapi-volume-2fc2080e-cd68-4409-a2dd-405afff44662 to disappear
Jun 11 10:54:16.821: INFO: Pod downwardapi-volume-2fc2080e-cd68-4409-a2dd-405afff44662 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 10:54:16.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4356" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":1,"skipped":4,"failed":0}
------------------------------
[sig-storage] Downward API volume
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 10:54:16.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jun 11 10:54:16.955: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f33f1186-bf2e-4cfd-998d-85af73ceb3ae" in namespace "downward-api-5675" to be "Succeeded or Failed"
Jun 11 10:54:16.979: INFO: Pod "downwardapi-volume-f33f1186-bf2e-4cfd-998d-85af73ceb3ae": Phase="Pending", Reason="", readiness=false. Elapsed: 24.075226ms
Jun 11 10:54:18.983: INFO: Pod "downwardapi-volume-f33f1186-bf2e-4cfd-998d-85af73ceb3ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027769001s
Jun 11 10:54:20.988: INFO: Pod "downwardapi-volume-f33f1186-bf2e-4cfd-998d-85af73ceb3ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032637564s
STEP: Saw pod success
Jun 11 10:54:20.988: INFO: Pod "downwardapi-volume-f33f1186-bf2e-4cfd-998d-85af73ceb3ae" satisfied condition "Succeeded or Failed"
Jun 11 10:54:20.991: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-f33f1186-bf2e-4cfd-998d-85af73ceb3ae container client-container:
STEP: delete the pod
Jun 11 10:54:21.057: INFO: Waiting for pod downwardapi-volume-f33f1186-bf2e-4cfd-998d-85af73ceb3ae to disappear
Jun 11 10:54:21.065: INFO: Pod downwardapi-volume-f33f1186-bf2e-4cfd-998d-85af73ceb3ae no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 10:54:21.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5675" for this suite.
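The two downward API specs above both build a pod that mounts a `downwardAPI` volume and asserts on the file contents the kubelet projects into it. A minimal hand-written sketch of such a manifest follows; the names, image, command, and mount path are illustrative assumptions, not the exact values the e2e framework generates:

```yaml
# Hypothetical sketch of a downward API volume test pod (not the framework's exact spec).
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumption; the suite uses its own test image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi                 # surfaced through requests.memory below
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu       # with no limit set, defaults to node allocatable CPU
          divisor: 1m
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
```

The first spec relies on the `limits.cpu` fallback (node allocatable when no limit is declared); the second reads back the declared `requests.memory`.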
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":2,"skipped":4,"failed":0}
SSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 10:54:21.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-47e85f14-4933-4c18-9623-82ef265b5778
STEP: Creating a pod to test consume configMaps
Jun 11 10:54:21.343: INFO: Waiting up to 5m0s for pod "pod-configmaps-c2f5f335-4171-4184-ba9a-3e74a56a7562" in namespace "configmap-1824" to be "Succeeded or Failed"
Jun 11 10:54:21.390: INFO: Pod "pod-configmaps-c2f5f335-4171-4184-ba9a-3e74a56a7562": Phase="Pending", Reason="", readiness=false. Elapsed: 46.857378ms
Jun 11 10:54:23.394: INFO: Pod "pod-configmaps-c2f5f335-4171-4184-ba9a-3e74a56a7562": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050608378s
Jun 11 10:54:25.398: INFO: Pod "pod-configmaps-c2f5f335-4171-4184-ba9a-3e74a56a7562": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054206644s
STEP: Saw pod success
Jun 11 10:54:25.398: INFO: Pod "pod-configmaps-c2f5f335-4171-4184-ba9a-3e74a56a7562" satisfied condition "Succeeded or Failed"
Jun 11 10:54:25.400: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-c2f5f335-4171-4184-ba9a-3e74a56a7562 container configmap-volume-test:
STEP: delete the pod
Jun 11 10:54:25.431: INFO: Waiting for pod pod-configmaps-c2f5f335-4171-4184-ba9a-3e74a56a7562 to disappear
Jun 11 10:54:25.442: INFO: Pod pod-configmaps-c2f5f335-4171-4184-ba9a-3e74a56a7562 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 10:54:25.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1824" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":3,"skipped":8,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 10:54:25.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-1ba788a9-036d-4caf-856e-adc17e968c86
STEP: Creating a pod to test consume configMaps
Jun 11 10:54:25.558: INFO: Waiting up to 5m0s for pod "pod-configmaps-7105fa13-07b8-4c81-80b7-d439b60d103d" in namespace "configmap-6840" to be "Succeeded or Failed"
Jun 11 10:54:25.574: INFO: Pod "pod-configmaps-7105fa13-07b8-4c81-80b7-d439b60d103d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.493307ms
Jun 11 10:54:27.605: INFO: Pod "pod-configmaps-7105fa13-07b8-4c81-80b7-d439b60d103d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047393222s
Jun 11 10:54:29.610: INFO: Pod "pod-configmaps-7105fa13-07b8-4c81-80b7-d439b60d103d": Phase="Running", Reason="", readiness=true. Elapsed: 4.05237601s
Jun 11 10:54:31.615: INFO: Pod "pod-configmaps-7105fa13-07b8-4c81-80b7-d439b60d103d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.056809915s
STEP: Saw pod success
Jun 11 10:54:31.615: INFO: Pod "pod-configmaps-7105fa13-07b8-4c81-80b7-d439b60d103d" satisfied condition "Succeeded or Failed"
Jun 11 10:54:31.619: INFO: Trying to get logs from node kali-worker pod pod-configmaps-7105fa13-07b8-4c81-80b7-d439b60d103d container configmap-volume-test:
STEP: delete the pod
Jun 11 10:54:31.647: INFO: Waiting for pod pod-configmaps-7105fa13-07b8-4c81-80b7-d439b60d103d to disappear
Jun 11 10:54:31.666: INFO: Pod pod-configmaps-7105fa13-07b8-4c81-80b7-d439b60d103d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 10:54:31.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6840" for this suite.
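The "with mappings" variant just above differs from the plain ConfigMap volume test in that the `configMap` volume source remaps keys to custom relative paths via `items`. A hand-written sketch of the two objects involved, under assumed names, keys, and image (the suite's real names carry generated UUID suffixes):

```yaml
# Hypothetical sketch of the ConfigMap-with-mappings test objects.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map    # assumed; real name has a UUID suffix
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example       # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                   # assumption
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1                  # key in the ConfigMap...
        path: path/to/data-2         # ...exposed at a remapped relative path
```

Without `items`, each key would appear as a file named after the key at the mount root; with the mapping, only the listed keys are projected, at the given paths.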
• [SLOW TEST:6.226 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":4,"skipped":27,"failed":0}
S
------------------------------
[sig-network] DNS
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 10:54:31.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-184.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-184.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 11 10:54:37.962: INFO: DNS probes using dns-184/dns-test-ccd7e9a8-a385-4863-a10e-27700e8811d3 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 10:54:38.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-184" for this suite.
• [SLOW TEST:6.389 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":5,"skipped":28,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 10:54:38.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun 11 10:54:38.238: INFO: Waiting up to 5m0s for pod "pod-c6c4c5e4-98f2-45e8-8ecb-f4d687fee33d" in namespace "emptydir-1388" to be "Succeeded or Failed"
Jun 11 10:54:38.478: INFO: Pod "pod-c6c4c5e4-98f2-45e8-8ecb-f4d687fee33d": Phase="Pending", Reason="", readiness=false. Elapsed: 239.446297ms
Jun 11 10:54:40.482: INFO: Pod "pod-c6c4c5e4-98f2-45e8-8ecb-f4d687fee33d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.244133145s
Jun 11 10:54:42.487: INFO: Pod "pod-c6c4c5e4-98f2-45e8-8ecb-f4d687fee33d": Phase="Running", Reason="", readiness=true. Elapsed: 4.248688615s
Jun 11 10:54:44.491: INFO: Pod "pod-c6c4c5e4-98f2-45e8-8ecb-f4d687fee33d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.253075954s
STEP: Saw pod success
Jun 11 10:54:44.491: INFO: Pod "pod-c6c4c5e4-98f2-45e8-8ecb-f4d687fee33d" satisfied condition "Succeeded or Failed"
Jun 11 10:54:44.495: INFO: Trying to get logs from node kali-worker pod pod-c6c4c5e4-98f2-45e8-8ecb-f4d687fee33d container test-container:
STEP: delete the pod
Jun 11 10:54:44.534: INFO: Waiting for pod pod-c6c4c5e4-98f2-45e8-8ecb-f4d687fee33d to disappear
Jun 11 10:54:44.552: INFO: Pod pod-c6c4c5e4-98f2-45e8-8ecb-f4d687fee33d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 10:54:44.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1388" for this suite.
• [SLOW TEST:6.496 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":6,"skipped":56,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod
  should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 10:54:44.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jun 11 10:54:44.674: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6886'
Jun 11 10:54:47.691: INFO: stderr: ""
Jun 11 10:54:47.691: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423
Jun 11 10:54:47.700: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6886'
Jun 11 10:54:53.400: INFO: stderr: ""
Jun 11 10:54:53.400: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 10:54:53.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6886" for this suite.
• [SLOW TEST:8.846 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":7,"skipped":99,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Watchers
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 10:54:53.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 10:54:59.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5185" for this suite.
• [SLOW TEST:6.009 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":8,"skipped":105,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 10:54:59.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 10:54:59.496: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jun 11 10:55:01.544: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 10:55:02.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7670" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":9,"skipped":122,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 10:55:02.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jun 11 10:55:11.906: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun 11 10:55:11.911: INFO: Pod pod-with-poststart-http-hook still exists
Jun 11 10:55:13.911: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun 11 10:55:13.915: INFO: Pod pod-with-poststart-http-hook still exists
Jun 11 10:55:15.911: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun 11 10:55:15.916: INFO: Pod pod-with-poststart-http-hook still exists
Jun 11 10:55:17.911: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun 11 10:55:17.916: INFO: Pod pod-with-poststart-http-hook still exists
Jun 11 10:55:19.911: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun 11 10:55:19.915: INFO: Pod pod-with-poststart-http-hook still exists
Jun 11 10:55:21.911: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun 11 10:55:21.916: INFO: Pod pod-with-poststart-http-hook still exists
Jun 11 10:55:23.911: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun 11 10:55:23.915: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 10:55:23.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1632" for this suite.
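The pod-with-poststart-http-hook pod in this spec carries a `postStart` `httpGet` handler that fires against the helper container created in the earlier [BeforeEach] step ("create the container to handle the HTTPGet hook request"). Roughly, under assumed values for the image, path, handler address, and port:

```yaml
# Hypothetical sketch; the real spec points the hook at the handler pod's actual IP.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.2       # assumption
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart   # assumed path on the handler
          host: 10.244.1.10           # assumed handler pod IP
          port: 8080                  # assumed handler port
```

The test then asserts that the handler received the request before tearing the pod down, which is the deletion polling visible in the log above.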
• [SLOW TEST:20.928 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":10,"skipped":132,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 10:55:23.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Jun 11 10:55:23.985: INFO: Waiting up to 5m0s for pod "pod-eb0ef705-31c5-4bd2-b38a-60d4550dc603" in namespace "emptydir-2158" to be "Succeeded or Failed"
Jun 11 10:55:23.989: INFO: Pod "pod-eb0ef705-31c5-4bd2-b38a-60d4550dc603": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073945ms
Jun 11 10:55:26.019: INFO: Pod "pod-eb0ef705-31c5-4bd2-b38a-60d4550dc603": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0344915s
Jun 11 10:55:28.024: INFO: Pod "pod-eb0ef705-31c5-4bd2-b38a-60d4550dc603": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039180149s
STEP: Saw pod success
Jun 11 10:55:28.024: INFO: Pod "pod-eb0ef705-31c5-4bd2-b38a-60d4550dc603" satisfied condition "Succeeded or Failed"
Jun 11 10:55:28.028: INFO: Trying to get logs from node kali-worker2 pod pod-eb0ef705-31c5-4bd2-b38a-60d4550dc603 container test-container:
STEP: delete the pod
Jun 11 10:55:28.105: INFO: Waiting for pod pod-eb0ef705-31c5-4bd2-b38a-60d4550dc603 to disappear
Jun 11 10:55:28.188: INFO: Pod pod-eb0ef705-31c5-4bd2-b38a-60d4550dc603 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 10:55:28.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2158" for this suite.
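The EmptyDir permutations in this suite, such as (non-root,0777,tmpfs) earlier and (root,0644,default) here, each encode a (user, file mode, storage medium) combination in a pod along these lines; names, image, and command are illustrative assumptions:

```yaml
# Hypothetical sketch of an emptyDir test pod for the (root,0644,default) case.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example       # assumed; real names carry a UUID
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                 # assumption
    command: ["sh", "-c", "ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                   # default node medium; `medium: Memory` for the tmpfs variants
```

The non-root variants additionally set `securityContext.runAsUser`, and the test asserts the created file's ownership and mode from the container's output.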
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":11,"skipped":156,"failed":0}
SSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 10:55:28.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 10:55:28.327: INFO: Waiting up to 5m0s for pod "busybox-user-65534-6bf40dc9-b46b-4ec6-8d10-2b2f5c1acb20" in namespace "security-context-test-2238" to be "Succeeded or Failed"
Jun 11 10:55:28.337: INFO: Pod "busybox-user-65534-6bf40dc9-b46b-4ec6-8d10-2b2f5c1acb20": Phase="Pending", Reason="", readiness=false. Elapsed: 9.201207ms
Jun 11 10:55:30.340: INFO: Pod "busybox-user-65534-6bf40dc9-b46b-4ec6-8d10-2b2f5c1acb20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012827229s
Jun 11 10:55:32.345: INFO: Pod "busybox-user-65534-6bf40dc9-b46b-4ec6-8d10-2b2f5c1acb20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017604527s
Jun 11 10:55:32.345: INFO: Pod "busybox-user-65534-6bf40dc9-b46b-4ec6-8d10-2b2f5c1acb20" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 10:55:32.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2238" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":12,"skipped":161,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 10:55:32.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-8709
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 11 10:55:32.471: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 11 10:55:32.564: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 11 10:55:34.843: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 11 10:55:36.569: INFO: The status of
Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 11 10:55:38.569: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 11 10:55:40.574: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 11 10:55:42.569: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 11 10:55:44.569: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 11 10:55:46.569: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 11 10:55:48.569: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 11 10:55:50.568: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 11 10:55:52.569: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 11 10:55:54.583: INFO: The status of Pod netserver-0 is Running (Ready = true) Jun 11 10:55:54.590: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jun 11 10:55:58.617: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.192:8080/dial?request=hostname&protocol=udp&host=10.244.2.191&port=8081&tries=1'] Namespace:pod-network-test-8709 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 11 10:55:58.617: INFO: >>> kubeConfig: /root/.kube/config I0611 10:55:58.655893 7 log.go:172] (0xc002a93810) (0xc002951e00) Create stream I0611 10:55:58.655923 7 log.go:172] (0xc002a93810) (0xc002951e00) Stream added, broadcasting: 1 I0611 10:55:58.660203 7 log.go:172] (0xc002a93810) Reply frame received for 1 I0611 10:55:58.660269 7 log.go:172] (0xc002a93810) (0xc001a99180) Create stream I0611 10:55:58.660295 7 log.go:172] (0xc002a93810) (0xc001a99180) Stream added, broadcasting: 3 I0611 10:55:58.661874 7 log.go:172] (0xc002a93810) Reply frame received for 3 I0611 10:55:58.661916 7 log.go:172] (0xc002a93810) (0xc001a992c0) Create stream I0611 10:55:58.661941 7 log.go:172] (0xc002a93810) (0xc001a992c0) Stream 
added, broadcasting: 5 I0611 10:55:58.662967 7 log.go:172] (0xc002a93810) Reply frame received for 5 I0611 10:55:58.924677 7 log.go:172] (0xc002a93810) Data frame received for 3 I0611 10:55:58.924711 7 log.go:172] (0xc001a99180) (3) Data frame handling I0611 10:55:58.924731 7 log.go:172] (0xc001a99180) (3) Data frame sent I0611 10:55:58.925868 7 log.go:172] (0xc002a93810) Data frame received for 5 I0611 10:55:58.925914 7 log.go:172] (0xc001a992c0) (5) Data frame handling I0611 10:55:58.925940 7 log.go:172] (0xc002a93810) Data frame received for 3 I0611 10:55:58.925956 7 log.go:172] (0xc001a99180) (3) Data frame handling I0611 10:55:58.928391 7 log.go:172] (0xc002a93810) Data frame received for 1 I0611 10:55:58.928426 7 log.go:172] (0xc002951e00) (1) Data frame handling I0611 10:55:58.928444 7 log.go:172] (0xc002951e00) (1) Data frame sent I0611 10:55:58.928459 7 log.go:172] (0xc002a93810) (0xc002951e00) Stream removed, broadcasting: 1 I0611 10:55:58.928474 7 log.go:172] (0xc002a93810) Go away received I0611 10:55:58.929082 7 log.go:172] (0xc002a93810) (0xc002951e00) Stream removed, broadcasting: 1 I0611 10:55:58.929319 7 log.go:172] (0xc002a93810) (0xc001a99180) Stream removed, broadcasting: 3 I0611 10:55:58.929350 7 log.go:172] (0xc002a93810) (0xc001a992c0) Stream removed, broadcasting: 5 Jun 11 10:55:58.929: INFO: Waiting for responses: map[] Jun 11 10:55:58.936: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.192:8080/dial?request=hostname&protocol=udp&host=10.244.1.127&port=8081&tries=1'] Namespace:pod-network-test-8709 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 11 10:55:58.936: INFO: >>> kubeConfig: /root/.kube/config I0611 10:55:58.969492 7 log.go:172] (0xc002a93ef0) (0xc001a22500) Create stream I0611 10:55:58.969525 7 log.go:172] (0xc002a93ef0) (0xc001a22500) Stream added, broadcasting: 1 I0611 10:55:58.971919 7 log.go:172] (0xc002a93ef0) Reply frame 
received for 1 I0611 10:55:58.971975 7 log.go:172] (0xc002a93ef0) (0xc0023259a0) Create stream I0611 10:55:58.971990 7 log.go:172] (0xc002a93ef0) (0xc0023259a0) Stream added, broadcasting: 3 I0611 10:55:58.973005 7 log.go:172] (0xc002a93ef0) Reply frame received for 3 I0611 10:55:58.973038 7 log.go:172] (0xc002a93ef0) (0xc001a99360) Create stream I0611 10:55:58.973057 7 log.go:172] (0xc002a93ef0) (0xc001a99360) Stream added, broadcasting: 5 I0611 10:55:58.974199 7 log.go:172] (0xc002a93ef0) Reply frame received for 5 I0611 10:55:59.035570 7 log.go:172] (0xc002a93ef0) Data frame received for 3 I0611 10:55:59.035615 7 log.go:172] (0xc0023259a0) (3) Data frame handling I0611 10:55:59.035750 7 log.go:172] (0xc0023259a0) (3) Data frame sent I0611 10:55:59.036323 7 log.go:172] (0xc002a93ef0) Data frame received for 3 I0611 10:55:59.036347 7 log.go:172] (0xc0023259a0) (3) Data frame handling I0611 10:55:59.036640 7 log.go:172] (0xc002a93ef0) Data frame received for 5 I0611 10:55:59.036666 7 log.go:172] (0xc001a99360) (5) Data frame handling I0611 10:55:59.038229 7 log.go:172] (0xc002a93ef0) Data frame received for 1 I0611 10:55:59.038269 7 log.go:172] (0xc001a22500) (1) Data frame handling I0611 10:55:59.038302 7 log.go:172] (0xc001a22500) (1) Data frame sent I0611 10:55:59.038324 7 log.go:172] (0xc002a93ef0) (0xc001a22500) Stream removed, broadcasting: 1 I0611 10:55:59.038346 7 log.go:172] (0xc002a93ef0) Go away received I0611 10:55:59.038426 7 log.go:172] (0xc002a93ef0) (0xc001a22500) Stream removed, broadcasting: 1 I0611 10:55:59.038440 7 log.go:172] (0xc002a93ef0) (0xc0023259a0) Stream removed, broadcasting: 3 I0611 10:55:59.038445 7 log.go:172] (0xc002a93ef0) (0xc001a99360) Stream removed, broadcasting: 5 Jun 11 10:55:59.038: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 10:55:59.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "pod-network-test-8709" for this suite. • [SLOW TEST:26.715 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":13,"skipped":170,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 10:55:59.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jun 11 10:55:59.156: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-897 
/api/v1/namespaces/watch-897/configmaps/e2e-watch-test-resource-version 82b78b46-35e8-4dfa-848b-5c475356dc97 11504961 0 2020-06-11 10:55:59 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-06-11 10:55:59 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 11 10:55:59.177: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-897 /api/v1/namespaces/watch-897/configmaps/e2e-watch-test-resource-version 82b78b46-35e8-4dfa-848b-5c475356dc97 11504962 0 2020-06-11 10:55:59 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-06-11 10:55:59 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 10:55:59.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-897" for this suite. 
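The Watchers test above starts a watch at the resourceVersion returned by the first update, and is then delivered only the later MODIFIED (11504961) and DELETED (11504962) events. A toy sketch of that delivery guarantee follows; note this treats resourceVersions as ordered integers purely for illustration, whereas real resourceVersions are opaque strings that only the API server may compare, and this is not the client-go watch API:

```python
def events_after(events, resource_version):
    """Replay only the events strictly newer than the given resourceVersion,
    which is the guarantee the test exercises by watching from the RV of the
    first update. `events` is a list of (resource_version, event_type) pairs.
    Comparing RVs numerically is an illustration-only assumption: real RVs
    are opaque and the filtering happens server-side."""
    rv = int(resource_version)
    return [(v, t) for v, t in events if int(v) > rv]
```

With the RVs from the log, watching from the first update's RV would yield exactly the second MODIFIED and the DELETED notification, matching the two "Got :" lines above.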
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":14,"skipped":172,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 10:55:59.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Jun 11 10:55:59.395: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 11 10:55:59.411: INFO: Waiting for terminating namespaces to be deleted... 
Jun 11 10:55:59.415: INFO: Logging pods the kubelet thinks is on node kali-worker before test Jun 11 10:55:59.422: INFO: netserver-0 from pod-network-test-8709 started at 2020-06-11 10:55:32 +0000 UTC (1 container statuses recorded) Jun 11 10:55:59.422: INFO: Container webserver ready: true, restart count 0 Jun 11 10:55:59.422: INFO: test-container-pod from pod-network-test-8709 started at 2020-06-11 10:55:54 +0000 UTC (1 container statuses recorded) Jun 11 10:55:59.422: INFO: Container webserver ready: true, restart count 0 Jun 11 10:55:59.422: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) Jun 11 10:55:59.422: INFO: Container kindnet-cni ready: true, restart count 3 Jun 11 10:55:59.422: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) Jun 11 10:55:59.422: INFO: Container kube-proxy ready: true, restart count 0 Jun 11 10:55:59.422: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test Jun 11 10:55:59.435: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) Jun 11 10:55:59.435: INFO: Container kindnet-cni ready: true, restart count 2 Jun 11 10:55:59.435: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) Jun 11 10:55:59.435: INFO: Container kube-proxy ready: true, restart count 0 Jun 11 10:55:59.435: INFO: netserver-1 from pod-network-test-8709 started at 2020-06-11 10:55:32 +0000 UTC (1 container statuses recorded) Jun 11 10:55:59.435: INFO: Container webserver ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to schedule Pod with nonempty NodeSelector. 
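The FailedScheduling events that follow come from a pod named restricted-pod whose nodeSelector matches no node label in the cluster. A minimal manifest that would reproduce the "0/3 nodes are available: 3 node(s) didn't match node selector" outcome might look like the following (the label key/value and image are illustrative assumptions, not the test's actual spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2   # any image works; the pod never schedules
  nodeSelector:
    # hypothetical label that no node carries, so scheduling must fail
    no-such-label: "42"
```

Because no node satisfies the selector, the scheduler emits FailedScheduling warnings like the two "Considering event" lines below rather than binding the pod.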
STEP: Considering event: Type = [Warning], Name = [restricted-pod.16177805c7edbe75], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.16177805cbbab4be], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 10:56:00.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-13" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":15,"skipped":198,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 10:56:00.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 
STEP: Creating service test in namespace statefulset-997 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating stateful set ss in namespace statefulset-997 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-997 Jun 11 10:56:00.617: INFO: Found 0 stateful pods, waiting for 1 Jun 11 10:56:10.621: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jun 11 10:56:10.625: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-997 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 11 10:56:10.936: INFO: stderr: "I0611 10:56:10.784299 85 log.go:172] (0xc00003a420) (0xc0006835e0) Create stream\nI0611 10:56:10.784358 85 log.go:172] (0xc00003a420) (0xc0006835e0) Stream added, broadcasting: 1\nI0611 10:56:10.796671 85 log.go:172] (0xc00003a420) Reply frame received for 1\nI0611 10:56:10.796725 85 log.go:172] (0xc00003a420) (0xc0005b6a00) Create stream\nI0611 10:56:10.796752 85 log.go:172] (0xc00003a420) (0xc0005b6a00) Stream added, broadcasting: 3\nI0611 10:56:10.798165 85 log.go:172] (0xc00003a420) Reply frame received for 3\nI0611 10:56:10.798214 85 log.go:172] (0xc00003a420) (0xc000940000) Create stream\nI0611 10:56:10.798231 85 log.go:172] (0xc00003a420) (0xc000940000) Stream added, broadcasting: 5\nI0611 10:56:10.799185 85 log.go:172] (0xc00003a420) Reply frame received for 5\nI0611 10:56:10.894987 85 log.go:172] (0xc00003a420) Data frame received for 5\nI0611 10:56:10.895014 85 log.go:172] (0xc000940000) (5) Data frame handling\nI0611 10:56:10.895032 85 log.go:172] (0xc000940000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html 
/tmp/\nI0611 10:56:10.926591 85 log.go:172] (0xc00003a420) Data frame received for 5\nI0611 10:56:10.926632 85 log.go:172] (0xc00003a420) Data frame received for 3\nI0611 10:56:10.926767 85 log.go:172] (0xc0005b6a00) (3) Data frame handling\nI0611 10:56:10.926805 85 log.go:172] (0xc0005b6a00) (3) Data frame sent\nI0611 10:56:10.926822 85 log.go:172] (0xc00003a420) Data frame received for 3\nI0611 10:56:10.926836 85 log.go:172] (0xc0005b6a00) (3) Data frame handling\nI0611 10:56:10.926864 85 log.go:172] (0xc000940000) (5) Data frame handling\nI0611 10:56:10.928668 85 log.go:172] (0xc00003a420) Data frame received for 1\nI0611 10:56:10.928690 85 log.go:172] (0xc0006835e0) (1) Data frame handling\nI0611 10:56:10.928711 85 log.go:172] (0xc0006835e0) (1) Data frame sent\nI0611 10:56:10.928725 85 log.go:172] (0xc00003a420) (0xc0006835e0) Stream removed, broadcasting: 1\nI0611 10:56:10.928865 85 log.go:172] (0xc00003a420) Go away received\nI0611 10:56:10.929017 85 log.go:172] (0xc00003a420) (0xc0006835e0) Stream removed, broadcasting: 1\nI0611 10:56:10.929034 85 log.go:172] (0xc00003a420) (0xc0005b6a00) Stream removed, broadcasting: 3\nI0611 10:56:10.929045 85 log.go:172] (0xc00003a420) (0xc000940000) Stream removed, broadcasting: 5\n" Jun 11 10:56:10.936: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 11 10:56:10.936: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 11 10:56:10.940: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 11 10:56:20.945: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 11 10:56:20.945: INFO: Waiting for statefulset status.replicas updated to 0 Jun 11 10:56:20.980: INFO: POD NODE PHASE GRACE CONDITIONS Jun 11 10:56:20.980: INFO: ss-0 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:00 
+0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:00 +0000 UTC }] Jun 11 10:56:20.980: INFO: Jun 11 10:56:20.980: INFO: StatefulSet ss has not reached scale 3, at 1 Jun 11 10:56:21.985: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.977844892s Jun 11 10:56:23.209: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.972230972s Jun 11 10:56:24.292: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.748666116s Jun 11 10:56:25.296: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.665977003s Jun 11 10:56:26.356: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.661359467s Jun 11 10:56:27.362: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.601628269s Jun 11 10:56:28.366: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.596101561s Jun 11 10:56:29.392: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.591187578s Jun 11 10:56:30.397: INFO: Verifying statefulset ss doesn't scale past 3 for another 566.020038ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-997 Jun 11 10:56:31.403: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-997 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 11 10:56:31.636: INFO: stderr: "I0611 10:56:31.534961 104 log.go:172] (0xc000920000) (0xc000932000) Create stream\nI0611 10:56:31.535032 104 log.go:172] (0xc000920000) (0xc000932000) Stream added, broadcasting: 1\nI0611 10:56:31.538667 104 log.go:172] (0xc000920000) Reply 
frame received for 1\nI0611 10:56:31.538710 104 log.go:172] (0xc000920000) (0xc0007f9180) Create stream\nI0611 10:56:31.538722 104 log.go:172] (0xc000920000) (0xc0007f9180) Stream added, broadcasting: 3\nI0611 10:56:31.539738 104 log.go:172] (0xc000920000) Reply frame received for 3\nI0611 10:56:31.539783 104 log.go:172] (0xc000920000) (0xc0009320a0) Create stream\nI0611 10:56:31.539799 104 log.go:172] (0xc000920000) (0xc0009320a0) Stream added, broadcasting: 5\nI0611 10:56:31.540740 104 log.go:172] (0xc000920000) Reply frame received for 5\nI0611 10:56:31.627915 104 log.go:172] (0xc000920000) Data frame received for 3\nI0611 10:56:31.627944 104 log.go:172] (0xc0007f9180) (3) Data frame handling\nI0611 10:56:31.627971 104 log.go:172] (0xc0007f9180) (3) Data frame sent\nI0611 10:56:31.628001 104 log.go:172] (0xc000920000) Data frame received for 5\nI0611 10:56:31.628040 104 log.go:172] (0xc0009320a0) (5) Data frame handling\nI0611 10:56:31.628070 104 log.go:172] (0xc0009320a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0611 10:56:31.628096 104 log.go:172] (0xc000920000) Data frame received for 3\nI0611 10:56:31.628148 104 log.go:172] (0xc0007f9180) (3) Data frame handling\nI0611 10:56:31.628193 104 log.go:172] (0xc000920000) Data frame received for 5\nI0611 10:56:31.628212 104 log.go:172] (0xc0009320a0) (5) Data frame handling\nI0611 10:56:31.630007 104 log.go:172] (0xc000920000) Data frame received for 1\nI0611 10:56:31.630035 104 log.go:172] (0xc000932000) (1) Data frame handling\nI0611 10:56:31.630056 104 log.go:172] (0xc000932000) (1) Data frame sent\nI0611 10:56:31.630074 104 log.go:172] (0xc000920000) (0xc000932000) Stream removed, broadcasting: 1\nI0611 10:56:31.630109 104 log.go:172] (0xc000920000) Go away received\nI0611 10:56:31.630444 104 log.go:172] (0xc000920000) (0xc000932000) Stream removed, broadcasting: 1\nI0611 10:56:31.630465 104 log.go:172] (0xc000920000) (0xc0007f9180) Stream removed, broadcasting: 3\nI0611 
10:56:31.630476 104 log.go:172] (0xc000920000) (0xc0009320a0) Stream removed, broadcasting: 5\n" Jun 11 10:56:31.636: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 11 10:56:31.636: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 11 10:56:31.636: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-997 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 11 10:56:31.948: INFO: stderr: "I0611 10:56:31.856937 126 log.go:172] (0xc000998000) (0xc000204d20) Create stream\nI0611 10:56:31.856998 126 log.go:172] (0xc000998000) (0xc000204d20) Stream added, broadcasting: 1\nI0611 10:56:31.859706 126 log.go:172] (0xc000998000) Reply frame received for 1\nI0611 10:56:31.859749 126 log.go:172] (0xc000998000) (0xc000a72000) Create stream\nI0611 10:56:31.859759 126 log.go:172] (0xc000998000) (0xc000a72000) Stream added, broadcasting: 3\nI0611 10:56:31.860492 126 log.go:172] (0xc000998000) Reply frame received for 3\nI0611 10:56:31.860516 126 log.go:172] (0xc000998000) (0xc000940000) Create stream\nI0611 10:56:31.860523 126 log.go:172] (0xc000998000) (0xc000940000) Stream added, broadcasting: 5\nI0611 10:56:31.861517 126 log.go:172] (0xc000998000) Reply frame received for 5\nI0611 10:56:31.912727 126 log.go:172] (0xc000998000) Data frame received for 5\nI0611 10:56:31.912761 126 log.go:172] (0xc000940000) (5) Data frame handling\nI0611 10:56:31.912781 126 log.go:172] (0xc000940000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0611 10:56:31.936174 126 log.go:172] (0xc000998000) Data frame received for 5\nI0611 10:56:31.936228 126 log.go:172] (0xc000940000) (5) Data frame handling\nI0611 10:56:31.936243 126 log.go:172] (0xc000940000) (5) Data frame sent\nI0611 10:56:31.936256 126 log.go:172] (0xc000998000) Data 
frame received for 5\nmv: can't rename '/tmp/index.html': No such file or directory\nI0611 10:56:31.936266 126 log.go:172] (0xc000940000) (5) Data frame handling\nI0611 10:56:31.936342 126 log.go:172] (0xc000940000) (5) Data frame sent\nI0611 10:56:31.936459 126 log.go:172] (0xc000998000) Data frame received for 3\nI0611 10:56:31.936497 126 log.go:172] (0xc000a72000) (3) Data frame handling\nI0611 10:56:31.936519 126 log.go:172] (0xc000a72000) (3) Data frame sent\n+ true\nI0611 10:56:31.936592 126 log.go:172] (0xc000998000) Data frame received for 3\nI0611 10:56:31.936621 126 log.go:172] (0xc000a72000) (3) Data frame handling\nI0611 10:56:31.936773 126 log.go:172] (0xc000998000) Data frame received for 5\nI0611 10:56:31.936792 126 log.go:172] (0xc000940000) (5) Data frame handling\nI0611 10:56:31.939366 126 log.go:172] (0xc000998000) Data frame received for 1\nI0611 10:56:31.939386 126 log.go:172] (0xc000204d20) (1) Data frame handling\nI0611 10:56:31.939398 126 log.go:172] (0xc000204d20) (1) Data frame sent\nI0611 10:56:31.939410 126 log.go:172] (0xc000998000) (0xc000204d20) Stream removed, broadcasting: 1\nI0611 10:56:31.939424 126 log.go:172] (0xc000998000) Go away received\nI0611 10:56:31.939984 126 log.go:172] (0xc000998000) (0xc000204d20) Stream removed, broadcasting: 1\nI0611 10:56:31.940014 126 log.go:172] (0xc000998000) (0xc000a72000) Stream removed, broadcasting: 3\nI0611 10:56:31.940041 126 log.go:172] (0xc000998000) (0xc000940000) Stream removed, broadcasting: 5\n" Jun 11 10:56:31.948: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 11 10:56:31.948: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 11 10:56:31.948: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-997 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || 
true' Jun 11 10:56:32.178: INFO: stderr: "I0611 10:56:32.096636 148 log.go:172] (0xc00003a160) (0xc000703540) Create stream\nI0611 10:56:32.096728 148 log.go:172] (0xc00003a160) (0xc000703540) Stream added, broadcasting: 1\nI0611 10:56:32.100838 148 log.go:172] (0xc00003a160) Reply frame received for 1\nI0611 10:56:32.100871 148 log.go:172] (0xc00003a160) (0xc000a9e000) Create stream\nI0611 10:56:32.100879 148 log.go:172] (0xc00003a160) (0xc000a9e000) Stream added, broadcasting: 3\nI0611 10:56:32.101964 148 log.go:172] (0xc00003a160) Reply frame received for 3\nI0611 10:56:32.102028 148 log.go:172] (0xc00003a160) (0xc000a62000) Create stream\nI0611 10:56:32.102045 148 log.go:172] (0xc00003a160) (0xc000a62000) Stream added, broadcasting: 5\nI0611 10:56:32.102992 148 log.go:172] (0xc00003a160) Reply frame received for 5\nI0611 10:56:32.170123 148 log.go:172] (0xc00003a160) Data frame received for 5\nI0611 10:56:32.170169 148 log.go:172] (0xc000a62000) (5) Data frame handling\nI0611 10:56:32.170183 148 log.go:172] (0xc000a62000) (5) Data frame sent\nI0611 10:56:32.170191 148 log.go:172] (0xc00003a160) Data frame received for 5\nI0611 10:56:32.170197 148 log.go:172] (0xc000a62000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0611 10:56:32.170216 148 log.go:172] (0xc00003a160) Data frame received for 3\nI0611 10:56:32.170222 148 log.go:172] (0xc000a9e000) (3) Data frame handling\nI0611 10:56:32.170233 148 log.go:172] (0xc000a9e000) (3) Data frame sent\nI0611 10:56:32.170240 148 log.go:172] (0xc00003a160) Data frame received for 3\nI0611 10:56:32.170246 148 log.go:172] (0xc000a9e000) (3) Data frame handling\nI0611 10:56:32.171919 148 log.go:172] (0xc00003a160) Data frame received for 1\nI0611 10:56:32.171945 148 log.go:172] (0xc000703540) (1) Data frame handling\nI0611 10:56:32.171973 148 log.go:172] (0xc000703540) (1) Data frame sent\nI0611 10:56:32.172017 148 
log.go:172] (0xc00003a160) (0xc000703540) Stream removed, broadcasting: 1\nI0611 10:56:32.172046 148 log.go:172] (0xc00003a160) Go away received\nI0611 10:56:32.172395 148 log.go:172] (0xc00003a160) (0xc000703540) Stream removed, broadcasting: 1\nI0611 10:56:32.172420 148 log.go:172] (0xc00003a160) (0xc000a9e000) Stream removed, broadcasting: 3\nI0611 10:56:32.172435 148 log.go:172] (0xc00003a160) (0xc000a62000) Stream removed, broadcasting: 5\n" Jun 11 10:56:32.178: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 11 10:56:32.178: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 11 10:56:32.189: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 11 10:56:32.189: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 11 10:56:32.189: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jun 11 10:56:32.192: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-997 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 11 10:56:32.385: INFO: stderr: "I0611 10:56:32.306707 168 log.go:172] (0xc000b43600) (0xc0009765a0) Create stream\nI0611 10:56:32.306772 168 log.go:172] (0xc000b43600) (0xc0009765a0) Stream added, broadcasting: 1\nI0611 10:56:32.310174 168 log.go:172] (0xc000b43600) Reply frame received for 1\nI0611 10:56:32.310205 168 log.go:172] (0xc000b43600) (0xc000b803c0) Create stream\nI0611 10:56:32.310215 168 log.go:172] (0xc000b43600) (0xc000b803c0) Stream added, broadcasting: 3\nI0611 10:56:32.311019 168 log.go:172] (0xc000b43600) Reply frame received for 3\nI0611 10:56:32.311059 168 log.go:172] (0xc000b43600) (0xc000976640) Create stream\nI0611 
10:56:32.311072 168 log.go:172] (0xc000b43600) (0xc000976640) Stream added, broadcasting: 5\nI0611 10:56:32.311899 168 log.go:172] (0xc000b43600) Reply frame received for 5\nI0611 10:56:32.378444 168 log.go:172] (0xc000b43600) Data frame received for 5\nI0611 10:56:32.378468 168 log.go:172] (0xc000976640) (5) Data frame handling\nI0611 10:56:32.378476 168 log.go:172] (0xc000976640) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0611 10:56:32.378496 168 log.go:172] (0xc000b43600) Data frame received for 3\nI0611 10:56:32.378523 168 log.go:172] (0xc000b803c0) (3) Data frame handling\nI0611 10:56:32.378536 168 log.go:172] (0xc000b43600) Data frame received for 5\nI0611 10:56:32.378555 168 log.go:172] (0xc000976640) (5) Data frame handling\nI0611 10:56:32.378573 168 log.go:172] (0xc000b803c0) (3) Data frame sent\nI0611 10:56:32.378586 168 log.go:172] (0xc000b43600) Data frame received for 3\nI0611 10:56:32.378592 168 log.go:172] (0xc000b803c0) (3) Data frame handling\nI0611 10:56:32.379793 168 log.go:172] (0xc000b43600) Data frame received for 1\nI0611 10:56:32.379819 168 log.go:172] (0xc0009765a0) (1) Data frame handling\nI0611 10:56:32.379849 168 log.go:172] (0xc0009765a0) (1) Data frame sent\nI0611 10:56:32.379870 168 log.go:172] (0xc000b43600) (0xc0009765a0) Stream removed, broadcasting: 1\nI0611 10:56:32.379892 168 log.go:172] (0xc000b43600) Go away received\nI0611 10:56:32.380215 168 log.go:172] (0xc000b43600) (0xc0009765a0) Stream removed, broadcasting: 1\nI0611 10:56:32.380235 168 log.go:172] (0xc000b43600) (0xc000b803c0) Stream removed, broadcasting: 3\nI0611 10:56:32.380243 168 log.go:172] (0xc000b43600) (0xc000976640) Stream removed, broadcasting: 5\n" Jun 11 10:56:32.385: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 11 10:56:32.385: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 11 10:56:32.385: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-997 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 11 10:56:32.643: INFO: stderr: "I0611 10:56:32.518757 188 log.go:172] (0xc0000e8e70) (0xc0009cc140) Create stream\nI0611 10:56:32.518828 188 log.go:172] (0xc0000e8e70) (0xc0009cc140) Stream added, broadcasting: 1\nI0611 10:56:32.521993 188 log.go:172] (0xc0000e8e70) Reply frame received for 1\nI0611 10:56:32.522049 188 log.go:172] (0xc0000e8e70) (0xc0006e5220) Create stream\nI0611 10:56:32.522062 188 log.go:172] (0xc0000e8e70) (0xc0006e5220) Stream added, broadcasting: 3\nI0611 10:56:32.523289 188 log.go:172] (0xc0000e8e70) Reply frame received for 3\nI0611 10:56:32.523326 188 log.go:172] (0xc0000e8e70) (0xc0006e5400) Create stream\nI0611 10:56:32.523338 188 log.go:172] (0xc0000e8e70) (0xc0006e5400) Stream added, broadcasting: 5\nI0611 10:56:32.524317 188 log.go:172] (0xc0000e8e70) Reply frame received for 5\nI0611 10:56:32.597876 188 log.go:172] (0xc0000e8e70) Data frame received for 5\nI0611 10:56:32.597903 188 log.go:172] (0xc0006e5400) (5) Data frame handling\nI0611 10:56:32.597915 188 log.go:172] (0xc0006e5400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0611 10:56:32.634893 188 log.go:172] (0xc0000e8e70) Data frame received for 3\nI0611 10:56:32.634929 188 log.go:172] (0xc0006e5220) (3) Data frame handling\nI0611 10:56:32.634945 188 log.go:172] (0xc0006e5220) (3) Data frame sent\nI0611 10:56:32.634957 188 log.go:172] (0xc0000e8e70) Data frame received for 3\nI0611 10:56:32.634970 188 log.go:172] (0xc0006e5220) (3) Data frame handling\nI0611 10:56:32.634986 188 log.go:172] (0xc0000e8e70) Data frame received for 5\nI0611 10:56:32.634993 188 log.go:172] (0xc0006e5400) (5) Data frame handling\nI0611 10:56:32.636850 188 log.go:172] (0xc0000e8e70) Data frame received for 1\nI0611 10:56:32.636886 188 log.go:172] 
(0xc0009cc140) (1) Data frame handling\nI0611 10:56:32.636901 188 log.go:172] (0xc0009cc140) (1) Data frame sent\nI0611 10:56:32.636921 188 log.go:172] (0xc0000e8e70) (0xc0009cc140) Stream removed, broadcasting: 1\nI0611 10:56:32.636939 188 log.go:172] (0xc0000e8e70) Go away received\nI0611 10:56:32.637816 188 log.go:172] (0xc0000e8e70) (0xc0009cc140) Stream removed, broadcasting: 1\nI0611 10:56:32.637843 188 log.go:172] (0xc0000e8e70) (0xc0006e5220) Stream removed, broadcasting: 3\nI0611 10:56:32.637856 188 log.go:172] (0xc0000e8e70) (0xc0006e5400) Stream removed, broadcasting: 5\n" Jun 11 10:56:32.644: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 11 10:56:32.644: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 11 10:56:32.644: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-997 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 11 10:56:32.912: INFO: stderr: "I0611 10:56:32.811541 210 log.go:172] (0xc000aa08f0) (0xc00093c0a0) Create stream\nI0611 10:56:32.811598 210 log.go:172] (0xc000aa08f0) (0xc00093c0a0) Stream added, broadcasting: 1\nI0611 10:56:32.814514 210 log.go:172] (0xc000aa08f0) Reply frame received for 1\nI0611 10:56:32.814571 210 log.go:172] (0xc000aa08f0) (0xc000a3e000) Create stream\nI0611 10:56:32.814588 210 log.go:172] (0xc000aa08f0) (0xc000a3e000) Stream added, broadcasting: 3\nI0611 10:56:32.815611 210 log.go:172] (0xc000aa08f0) Reply frame received for 3\nI0611 10:56:32.815660 210 log.go:172] (0xc000aa08f0) (0xc0006ef180) Create stream\nI0611 10:56:32.815682 210 log.go:172] (0xc000aa08f0) (0xc0006ef180) Stream added, broadcasting: 5\nI0611 10:56:32.816629 210 log.go:172] (0xc000aa08f0) Reply frame received for 5\nI0611 10:56:32.877563 210 log.go:172] (0xc000aa08f0) Data frame received for 
5\nI0611 10:56:32.877590 210 log.go:172] (0xc0006ef180) (5) Data frame handling\nI0611 10:56:32.877600 210 log.go:172] (0xc0006ef180) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0611 10:56:32.904961 210 log.go:172] (0xc000aa08f0) Data frame received for 3\nI0611 10:56:32.904981 210 log.go:172] (0xc000a3e000) (3) Data frame handling\nI0611 10:56:32.905221 210 log.go:172] (0xc000aa08f0) Data frame received for 5\nI0611 10:56:32.905271 210 log.go:172] (0xc0006ef180) (5) Data frame handling\nI0611 10:56:32.905310 210 log.go:172] (0xc000a3e000) (3) Data frame sent\nI0611 10:56:32.905662 210 log.go:172] (0xc000aa08f0) Data frame received for 3\nI0611 10:56:32.905673 210 log.go:172] (0xc000a3e000) (3) Data frame handling\nI0611 10:56:32.907523 210 log.go:172] (0xc000aa08f0) Data frame received for 1\nI0611 10:56:32.907568 210 log.go:172] (0xc00093c0a0) (1) Data frame handling\nI0611 10:56:32.907606 210 log.go:172] (0xc00093c0a0) (1) Data frame sent\nI0611 10:56:32.907786 210 log.go:172] (0xc000aa08f0) (0xc00093c0a0) Stream removed, broadcasting: 1\nI0611 10:56:32.907834 210 log.go:172] (0xc000aa08f0) Go away received\nI0611 10:56:32.908099 210 log.go:172] (0xc000aa08f0) (0xc00093c0a0) Stream removed, broadcasting: 1\nI0611 10:56:32.908119 210 log.go:172] (0xc000aa08f0) (0xc000a3e000) Stream removed, broadcasting: 3\nI0611 10:56:32.908129 210 log.go:172] (0xc000aa08f0) (0xc0006ef180) Stream removed, broadcasting: 5\n" Jun 11 10:56:32.912: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 11 10:56:32.912: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 11 10:56:32.912: INFO: Waiting for statefulset status.replicas updated to 0 Jun 11 10:56:32.916: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jun 11 10:56:42.924: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently 
Running - Ready=false Jun 11 10:56:42.924: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 11 10:56:42.924: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 11 10:56:42.953: INFO: POD NODE PHASE GRACE CONDITIONS Jun 11 10:56:42.954: INFO: ss-0 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:00 +0000 UTC }] Jun 11 10:56:42.954: INFO: ss-1 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:20 +0000 UTC }] Jun 11 10:56:42.954: INFO: ss-2 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:20 +0000 UTC }] Jun 11 10:56:42.954: INFO: Jun 11 10:56:42.954: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 11 10:56:44.393: INFO: POD NODE PHASE GRACE CONDITIONS Jun 11 10:56:44.393: INFO: ss-0 
kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:00 +0000 UTC }] Jun 11 10:56:44.393: INFO: ss-1 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:20 +0000 UTC }] Jun 11 10:56:44.393: INFO: ss-2 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:20 +0000 UTC }] Jun 11 10:56:44.393: INFO: Jun 11 10:56:44.393: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 11 10:56:45.398: INFO: POD NODE PHASE GRACE CONDITIONS Jun 11 10:56:45.398: INFO: ss-0 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:32 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:00 +0000 UTC }] Jun 11 10:56:45.398: INFO: ss-1 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:20 +0000 UTC }] Jun 11 10:56:45.398: INFO: ss-2 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:20 +0000 UTC }] Jun 11 10:56:45.412: INFO: Jun 11 10:56:45.412: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 11 10:56:46.435: INFO: POD NODE PHASE GRACE CONDITIONS Jun 11 10:56:46.435: INFO: ss-0 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:00 +0000 UTC }] Jun 11 10:56:46.435: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:20 +0000 UTC }] Jun 11 10:56:46.435: INFO: ss-2 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:20 +0000 UTC }] Jun 11 10:56:46.435: INFO: Jun 11 10:56:46.435: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 11 10:56:47.439: INFO: POD NODE PHASE GRACE CONDITIONS Jun 11 10:56:47.439: INFO: ss-0 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:00 +0000 UTC }] Jun 11 10:56:47.439: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:20 +0000 UTC }] Jun 11 
10:56:47.439: INFO: ss-2 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:20 +0000 UTC }] Jun 11 10:56:47.439: INFO: Jun 11 10:56:47.439: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 11 10:56:48.444: INFO: POD NODE PHASE GRACE CONDITIONS Jun 11 10:56:48.444: INFO: ss-0 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:00 +0000 UTC }] Jun 11 10:56:48.444: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:20 +0000 UTC }] Jun 11 10:56:48.444: INFO: ss-2 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:20 +0000 UTC }] Jun 11 10:56:48.444: INFO: Jun 11 10:56:48.444: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 11 10:56:49.450: INFO: POD NODE PHASE GRACE CONDITIONS Jun 11 10:56:49.450: INFO: ss-0 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:00 +0000 UTC }] Jun 11 10:56:49.450: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:20 +0000 UTC }] Jun 11 10:56:49.450: INFO: ss-2 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:20 +0000 UTC }] Jun 11 10:56:49.450: INFO: Jun 11 10:56:49.450: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 11 10:56:50.455: INFO: POD NODE PHASE GRACE 
CONDITIONS Jun 11 10:56:50.455: INFO: ss-0 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:00 +0000 UTC }] Jun 11 10:56:50.455: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:20 +0000 UTC }] Jun 11 10:56:50.455: INFO: ss-2 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:20 +0000 UTC }] Jun 11 10:56:50.455: INFO: Jun 11 10:56:50.455: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 11 10:56:51.460: INFO: POD NODE PHASE GRACE CONDITIONS Jun 11 10:56:51.460: INFO: ss-0 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 
+0000 UTC 2020-06-11 10:56:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:00 +0000 UTC }] Jun 11 10:56:51.460: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:20 +0000 UTC }] Jun 11 10:56:51.460: INFO: ss-2 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:20 +0000 UTC }] Jun 11 10:56:51.460: INFO: Jun 11 10:56:51.460: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 11 10:56:52.464: INFO: POD NODE PHASE GRACE CONDITIONS Jun 11 10:56:52.464: INFO: ss-0 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:00 +0000 UTC }] Jun 11 10:56:52.464: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:21 +0000 UTC } {Ready 
False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:20 +0000 UTC }] Jun 11 10:56:52.464: INFO: ss-2 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-11 10:56:20 +0000 UTC }] Jun 11 10:56:52.464: INFO: Jun 11 10:56:52.464: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-997 Jun 11 10:56:53.472: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-997 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 11 10:56:53.619: INFO: rc: 1 Jun 11 10:56:53.619: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-997 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jun 11 10:57:03.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-997 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 11
10:57:03.720: INFO: rc: 1 Jun 11 10:57:03.720: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-997 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 11 10:57:13.720: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-997 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 11 10:57:13.849: INFO: rc: 1 Jun 11 10:57:13.849: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-997 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 11 10:57:23.850: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-997 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 11 10:57:23.950: INFO: rc: 1 Jun 11 10:57:23.950: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-997 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 11 10:57:33.951: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-997 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 11 10:57:34.069: INFO: rc: 1 Jun 11 
10:57:34.069: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-997 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 11 10:57:44.069: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-997 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 11 10:57:44.170: INFO: rc: 1 Jun 11 10:57:44.171: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-997 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 11 10:57:54.171: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-997 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 11 10:57:54.273: INFO: rc: 1 Jun 11 10:57:54.273: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-997 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 11 10:58:04.274: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-997 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 11 10:58:04.363: INFO: rc: 1 Jun 11 10:58:04.363: INFO: Waiting 10s to 
retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-997 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 11 10:58:14.363: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-997 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 11 10:58:14.478: INFO: rc: 1 Jun 11 10:58:14.479: INFO: Waiting 10s to retry failed RunHostCmd [... the same Running/rc: 1/"pods "ss-1" not found"/Waiting 10s records repeat every 10s from 10:58:24 through 11:01:46; identical retries elided ...] Jun 11 11:01:56.647: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-997 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 11 11:01:56.791: INFO: rc: 1 Jun 11 11:01:56.791: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: Jun 11 11:01:56.791: INFO: Scaling statefulset ss to 0 Jun 11 11:01:56.798: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Jun 11 11:01:56.799: INFO: Deleting all statefulset in ns statefulset-997 Jun 11 11:01:56.801: INFO: Scaling statefulset ss to 0 Jun 11 11:01:56.807: INFO: Waiting for statefulset status.replicas updated to 0 Jun 11 11:01:56.809: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:01:56.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-997" for this suite. • [SLOW TEST:356.348 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":16,"skipped":218,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:01:56.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 11 11:01:57.067: INFO: Waiting up to 5m0s for pod "pod-901c3dae-c430-4c95-8139-998312342752" in namespace "emptydir-1546" to be "Succeeded or 
Failed" Jun 11 11:01:57.089: INFO: Pod "pod-901c3dae-c430-4c95-8139-998312342752": Phase="Pending", Reason="", readiness=false. Elapsed: 22.017733ms Jun 11 11:01:59.092: INFO: Pod "pod-901c3dae-c430-4c95-8139-998312342752": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025350847s Jun 11 11:02:01.095: INFO: Pod "pod-901c3dae-c430-4c95-8139-998312342752": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028664829s Jun 11 11:02:03.102: INFO: Pod "pod-901c3dae-c430-4c95-8139-998312342752": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03498957s STEP: Saw pod success Jun 11 11:02:03.102: INFO: Pod "pod-901c3dae-c430-4c95-8139-998312342752" satisfied condition "Succeeded or Failed" Jun 11 11:02:03.113: INFO: Trying to get logs from node kali-worker2 pod pod-901c3dae-c430-4c95-8139-998312342752 container test-container: STEP: delete the pod Jun 11 11:02:03.161: INFO: Waiting for pod pod-901c3dae-c430-4c95-8139-998312342752 to disappear Jun 11 11:02:03.173: INFO: Pod pod-901c3dae-c430-4c95-8139-998312342752 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:02:03.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1546" for this suite. 
• [SLOW TEST:6.330 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":17,"skipped":232,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:02:03.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 11 11:02:08.477: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:02:08.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1579" for this suite. • [SLOW TEST:5.345 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":18,"skipped":253,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:02:08.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-3d7d88ac-2233-4f62-ba70-6ef84aedd696 STEP: Creating secret with name s-test-opt-upd-795518c7-c602-4731-824d-c174fed212b8 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-3d7d88ac-2233-4f62-ba70-6ef84aedd696 STEP: Updating secret s-test-opt-upd-795518c7-c602-4731-824d-c174fed212b8 STEP: Creating secret with name s-test-opt-create-3442e638-ac38-4171-9ab2-003df987748e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:03:24.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-292" for this suite. • [SLOW TEST:75.890 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":19,"skipped":259,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:03:24.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jun 11 11:03:25.337: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jun 11 11:03:27.915: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470205, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470205, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470205, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470205, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 11 11:03:29.988: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470205, 
loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470205, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470205, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470205, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 11 11:03:32.952: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jun 11 11:03:32.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:03:36.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7349" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:12.509 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":20,"skipped":274,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:03:36.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jun 11 11:03:37.127: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. 
Jun 11 11:03:37.221: INFO: Number of nodes with available pods: 0 Jun 11 11:03:37.221: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Jun 11 11:03:37.665: INFO: Number of nodes with available pods: 0 Jun 11 11:03:37.665: INFO: Node kali-worker2 is running more than one daemon pod Jun 11 11:03:38.668: INFO: Number of nodes with available pods: 0 Jun 11 11:03:38.669: INFO: Node kali-worker2 is running more than one daemon pod Jun 11 11:03:39.670: INFO: Number of nodes with available pods: 0 Jun 11 11:03:39.670: INFO: Node kali-worker2 is running more than one daemon pod Jun 11 11:03:40.713: INFO: Number of nodes with available pods: 0 Jun 11 11:03:40.713: INFO: Node kali-worker2 is running more than one daemon pod Jun 11 11:03:41.702: INFO: Number of nodes with available pods: 1 Jun 11 11:03:41.702: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jun 11 11:03:41.738: INFO: Number of nodes with available pods: 1 Jun 11 11:03:41.738: INFO: Number of running nodes: 0, number of available pods: 1 Jun 11 11:03:42.742: INFO: Number of nodes with available pods: 0 Jun 11 11:03:42.742: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jun 11 11:03:42.764: INFO: Number of nodes with available pods: 0 Jun 11 11:03:42.764: INFO: Node kali-worker2 is running more than one daemon pod Jun 11 11:03:43.769: INFO: Number of nodes with available pods: 0 Jun 11 11:03:43.769: INFO: Node kali-worker2 is running more than one daemon pod Jun 11 11:03:44.779: INFO: Number of nodes with available pods: 0 Jun 11 11:03:44.779: INFO: Node kali-worker2 is running more than one daemon pod Jun 11 11:03:45.785: INFO: Number of nodes with available pods: 0 Jun 11 11:03:45.785: INFO: Node kali-worker2 is running more than 
one daemon pod Jun 11 11:03:46.768: INFO: Number of nodes with available pods: 0 Jun 11 11:03:46.769: INFO: Node kali-worker2 is running more than one daemon pod Jun 11 11:03:47.823: INFO: Number of nodes with available pods: 0 Jun 11 11:03:47.823: INFO: Node kali-worker2 is running more than one daemon pod Jun 11 11:03:48.768: INFO: Number of nodes with available pods: 0 Jun 11 11:03:48.768: INFO: Node kali-worker2 is running more than one daemon pod Jun 11 11:03:49.768: INFO: Number of nodes with available pods: 1 Jun 11 11:03:49.768: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8311, will wait for the garbage collector to delete the pods Jun 11 11:03:49.836: INFO: Deleting DaemonSet.extensions daemon-set took: 9.422694ms Jun 11 11:03:49.936: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.316356ms Jun 11 11:04:03.440: INFO: Number of nodes with available pods: 0 Jun 11 11:04:03.440: INFO: Number of running nodes: 0, number of available pods: 0 Jun 11 11:04:03.445: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8311/daemonsets","resourceVersion":"11506791"},"items":null} Jun 11 11:04:03.447: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8311/pods","resourceVersion":"11506791"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:04:03.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8311" for this suite. 
• [SLOW TEST:26.561 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":21,"skipped":286,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:04:03.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jun 11 11:04:03.583: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:04:07.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-101" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":22,"skipped":314,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:04:07.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-qmtd STEP: Creating a pod to test atomic-volume-subpath Jun 11 11:04:08.836: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qmtd" in namespace "subpath-3688" to be "Succeeded or Failed" Jun 11 11:04:08.921: INFO: Pod "pod-subpath-test-configmap-qmtd": Phase="Pending", Reason="", readiness=false. Elapsed: 85.577383ms Jun 11 11:04:11.024: INFO: Pod "pod-subpath-test-configmap-qmtd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.188708397s Jun 11 11:04:13.030: INFO: Pod "pod-subpath-test-configmap-qmtd": Phase="Running", Reason="", readiness=true. Elapsed: 4.194507464s Jun 11 11:04:15.035: INFO: Pod "pod-subpath-test-configmap-qmtd": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.198784573s Jun 11 11:04:17.039: INFO: Pod "pod-subpath-test-configmap-qmtd": Phase="Running", Reason="", readiness=true. Elapsed: 8.203434445s Jun 11 11:04:19.044: INFO: Pod "pod-subpath-test-configmap-qmtd": Phase="Running", Reason="", readiness=true. Elapsed: 10.208167016s Jun 11 11:04:21.048: INFO: Pod "pod-subpath-test-configmap-qmtd": Phase="Running", Reason="", readiness=true. Elapsed: 12.212510777s Jun 11 11:04:23.053: INFO: Pod "pod-subpath-test-configmap-qmtd": Phase="Running", Reason="", readiness=true. Elapsed: 14.216797146s Jun 11 11:04:25.057: INFO: Pod "pod-subpath-test-configmap-qmtd": Phase="Running", Reason="", readiness=true. Elapsed: 16.221074267s Jun 11 11:04:27.061: INFO: Pod "pod-subpath-test-configmap-qmtd": Phase="Running", Reason="", readiness=true. Elapsed: 18.2254618s Jun 11 11:04:29.066: INFO: Pod "pod-subpath-test-configmap-qmtd": Phase="Running", Reason="", readiness=true. Elapsed: 20.230530945s Jun 11 11:04:31.071: INFO: Pod "pod-subpath-test-configmap-qmtd": Phase="Running", Reason="", readiness=true. Elapsed: 22.235105816s Jun 11 11:04:33.075: INFO: Pod "pod-subpath-test-configmap-qmtd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.238985968s STEP: Saw pod success Jun 11 11:04:33.075: INFO: Pod "pod-subpath-test-configmap-qmtd" satisfied condition "Succeeded or Failed" Jun 11 11:04:33.077: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-configmap-qmtd container test-container-subpath-configmap-qmtd: STEP: delete the pod Jun 11 11:04:33.296: INFO: Waiting for pod pod-subpath-test-configmap-qmtd to disappear Jun 11 11:04:33.304: INFO: Pod pod-subpath-test-configmap-qmtd no longer exists STEP: Deleting pod pod-subpath-test-configmap-qmtd Jun 11 11:04:33.304: INFO: Deleting pod "pod-subpath-test-configmap-qmtd" in namespace "subpath-3688" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:04:33.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3688" for this suite. • [SLOW TEST:25.618 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":23,"skipped":323,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:04:33.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 11 11:04:33.505: INFO: Waiting up to 5m0s for pod "pod-4f2ef614-5180-4b28-8fa2-115a1e5423d9" in namespace "emptydir-6472" to be "Succeeded or Failed" Jun 11 11:04:33.508: INFO: Pod "pod-4f2ef614-5180-4b28-8fa2-115a1e5423d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.942413ms Jun 11 11:04:35.534: INFO: Pod "pod-4f2ef614-5180-4b28-8fa2-115a1e5423d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029170709s Jun 11 11:04:37.537: INFO: Pod "pod-4f2ef614-5180-4b28-8fa2-115a1e5423d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032378692s STEP: Saw pod success Jun 11 11:04:37.537: INFO: Pod "pod-4f2ef614-5180-4b28-8fa2-115a1e5423d9" satisfied condition "Succeeded or Failed" Jun 11 11:04:37.540: INFO: Trying to get logs from node kali-worker2 pod pod-4f2ef614-5180-4b28-8fa2-115a1e5423d9 container test-container: STEP: delete the pod Jun 11 11:04:37.591: INFO: Waiting for pod pod-4f2ef614-5180-4b28-8fa2-115a1e5423d9 to disappear Jun 11 11:04:37.629: INFO: Pod pod-4f2ef614-5180-4b28-8fa2-115a1e5423d9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:04:37.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6472" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":24,"skipped":390,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:04:37.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4832 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4832;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4832 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4832;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4832.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4832.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4832.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4832.svc;check="$$(dig +notcp 
+noall +answer +search _http._tcp.dns-test-service.dns-4832.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4832.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4832.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4832.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4832.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4832.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4832.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4832.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4832.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 121.219.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.219.121_udp@PTR;check="$$(dig +tcp +noall +answer +search 121.219.103.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.103.219.121_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4832 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4832;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4832 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4832;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4832.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4832.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4832.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4832.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4832.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4832.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4832.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4832.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4832.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4832.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4832.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4832.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4832.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 121.219.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.219.121_udp@PTR;check="$$(dig +tcp +noall +answer +search 121.219.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.219.121_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 11 11:04:43.999: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:44.044: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:44.047: INFO: Unable to read wheezy_udp@dns-test-service.dns-4832 from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:44.051: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4832 from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:44.055: INFO: Unable to read wheezy_udp@dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods 
dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:44.059: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:44.062: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:44.065: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:44.085: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:44.087: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:44.090: INFO: Unable to read jessie_udp@dns-test-service.dns-4832 from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:44.092: INFO: Unable to read jessie_tcp@dns-test-service.dns-4832 from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:44.094: INFO: Unable to read jessie_udp@dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the 
requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:44.097: INFO: Unable to read jessie_tcp@dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:44.099: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:44.102: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:44.118: INFO: Lookups using dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4832 wheezy_tcp@dns-test-service.dns-4832 wheezy_udp@dns-test-service.dns-4832.svc wheezy_tcp@dns-test-service.dns-4832.svc wheezy_udp@_http._tcp.dns-test-service.dns-4832.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4832.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4832 jessie_tcp@dns-test-service.dns-4832 jessie_udp@dns-test-service.dns-4832.svc jessie_tcp@dns-test-service.dns-4832.svc jessie_udp@_http._tcp.dns-test-service.dns-4832.svc jessie_tcp@_http._tcp.dns-test-service.dns-4832.svc] Jun 11 11:04:49.123: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:49.126: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not 
find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:49.130: INFO: Unable to read wheezy_udp@dns-test-service.dns-4832 from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:49.133: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4832 from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:49.136: INFO: Unable to read wheezy_udp@dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:49.139: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:49.142: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:49.144: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:49.171: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:49.174: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: 
the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:49.176: INFO: Unable to read jessie_udp@dns-test-service.dns-4832 from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:49.179: INFO: Unable to read jessie_tcp@dns-test-service.dns-4832 from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:49.182: INFO: Unable to read jessie_udp@dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:49.187: INFO: Unable to read jessie_tcp@dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:49.190: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:49.193: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:49.217: INFO: Lookups using dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4832 wheezy_tcp@dns-test-service.dns-4832 wheezy_udp@dns-test-service.dns-4832.svc wheezy_tcp@dns-test-service.dns-4832.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-4832.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4832.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4832 jessie_tcp@dns-test-service.dns-4832 jessie_udp@dns-test-service.dns-4832.svc jessie_tcp@dns-test-service.dns-4832.svc jessie_udp@_http._tcp.dns-test-service.dns-4832.svc jessie_tcp@_http._tcp.dns-test-service.dns-4832.svc] Jun 11 11:04:54.163: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:54.167: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:54.176: INFO: Unable to read wheezy_udp@dns-test-service.dns-4832 from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:54.182: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4832 from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:54.187: INFO: Unable to read wheezy_udp@dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:54.190: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:54.192: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:54.194: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:54.211: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:54.214: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:54.217: INFO: Unable to read jessie_udp@dns-test-service.dns-4832 from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:54.219: INFO: Unable to read jessie_tcp@dns-test-service.dns-4832 from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:54.222: INFO: Unable to read jessie_udp@dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:54.225: INFO: Unable to read jessie_tcp@dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:54.228: 
INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:54.230: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:54.244: INFO: Lookups using dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4832 wheezy_tcp@dns-test-service.dns-4832 wheezy_udp@dns-test-service.dns-4832.svc wheezy_tcp@dns-test-service.dns-4832.svc wheezy_udp@_http._tcp.dns-test-service.dns-4832.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4832.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4832 jessie_tcp@dns-test-service.dns-4832 jessie_udp@dns-test-service.dns-4832.svc jessie_tcp@dns-test-service.dns-4832.svc jessie_udp@_http._tcp.dns-test-service.dns-4832.svc jessie_tcp@_http._tcp.dns-test-service.dns-4832.svc] Jun 11 11:04:59.124: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:59.127: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:59.131: INFO: Unable to read wheezy_udp@dns-test-service.dns-4832 from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 
11:04:59.135: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4832 from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:59.137: INFO: Unable to read wheezy_udp@dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:59.140: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:59.142: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:59.145: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:59.162: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:59.165: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:59.167: INFO: Unable to read jessie_udp@dns-test-service.dns-4832 from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods 
dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:59.170: INFO: Unable to read jessie_tcp@dns-test-service.dns-4832 from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:59.172: INFO: Unable to read jessie_udp@dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:59.175: INFO: Unable to read jessie_tcp@dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:59.177: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:59.180: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a) Jun 11 11:04:59.196: INFO: Lookups using dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4832 wheezy_tcp@dns-test-service.dns-4832 wheezy_udp@dns-test-service.dns-4832.svc wheezy_tcp@dns-test-service.dns-4832.svc wheezy_udp@_http._tcp.dns-test-service.dns-4832.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4832.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4832 jessie_tcp@dns-test-service.dns-4832 jessie_udp@dns-test-service.dns-4832.svc jessie_tcp@dns-test-service.dns-4832.svc 
jessie_udp@_http._tcp.dns-test-service.dns-4832.svc jessie_tcp@_http._tcp.dns-test-service.dns-4832.svc]
Jun 11 11:05:04.123: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:04.126: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:04.128: INFO: Unable to read wheezy_udp@dns-test-service.dns-4832 from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:04.132: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4832 from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:04.135: INFO: Unable to read wheezy_udp@dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:04.138: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:04.140: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:04.142: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:04.161: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:04.163: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:04.167: INFO: Unable to read jessie_udp@dns-test-service.dns-4832 from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:04.170: INFO: Unable to read jessie_tcp@dns-test-service.dns-4832 from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:04.172: INFO: Unable to read jessie_udp@dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:04.175: INFO: Unable to read jessie_tcp@dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:04.178: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:04.182: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:04.202: INFO: Lookups using dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4832 wheezy_tcp@dns-test-service.dns-4832 wheezy_udp@dns-test-service.dns-4832.svc wheezy_tcp@dns-test-service.dns-4832.svc wheezy_udp@_http._tcp.dns-test-service.dns-4832.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4832.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4832 jessie_tcp@dns-test-service.dns-4832 jessie_udp@dns-test-service.dns-4832.svc jessie_tcp@dns-test-service.dns-4832.svc jessie_udp@_http._tcp.dns-test-service.dns-4832.svc jessie_tcp@_http._tcp.dns-test-service.dns-4832.svc]
Jun 11 11:05:09.124: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:09.127: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:09.131: INFO: Unable to read wheezy_udp@dns-test-service.dns-4832 from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:09.134: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4832 from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:09.137: INFO: Unable to read wheezy_udp@dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:09.148: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:09.152: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:09.155: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:09.175: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:09.177: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:09.180: INFO: Unable to read jessie_udp@dns-test-service.dns-4832 from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:09.183: INFO: Unable to read jessie_tcp@dns-test-service.dns-4832 from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:09.186: INFO: Unable to read jessie_udp@dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:09.189: INFO: Unable to read jessie_tcp@dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:09.192: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:09.194: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4832.svc from pod dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a: the server could not find the requested resource (get pods dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a)
Jun 11 11:05:09.208: INFO: Lookups using dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4832 wheezy_tcp@dns-test-service.dns-4832 wheezy_udp@dns-test-service.dns-4832.svc wheezy_tcp@dns-test-service.dns-4832.svc wheezy_udp@_http._tcp.dns-test-service.dns-4832.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4832.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4832 jessie_tcp@dns-test-service.dns-4832 jessie_udp@dns-test-service.dns-4832.svc jessie_tcp@dns-test-service.dns-4832.svc jessie_udp@_http._tcp.dns-test-service.dns-4832.svc jessie_tcp@_http._tcp.dns-test-service.dns-4832.svc]
Jun 11 11:05:14.812: INFO: DNS probes using dns-4832/dns-test-cbbbba88-2797-4b65-8b95-0b04ed918d4a succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:05:15.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4832" for this suite.
• [SLOW TEST:37.965 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":25,"skipped":407,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:05:15.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 11 11:05:16.447: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 11 11:05:18.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1,
Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470316, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470316, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470316, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470316, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 11 11:05:20.530: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470316, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470316, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470316, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470316, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 11 11:05:23.562: INFO: Waiting for amount 
of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 11:05:23.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9748-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:05:24.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1872" for this suite.
STEP: Destroying namespace "webhook-1872-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:10.169 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":26,"skipped":443,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a
kubernetes client
Jun 11 11:05:25.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 11:05:26.387: INFO: Creating deployment "test-recreate-deployment"
Jun 11 11:05:26.423: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jun 11 11:05:26.535: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jun 11 11:05:28.907: INFO: Waiting deployment "test-recreate-deployment" to complete
Jun 11 11:05:28.910: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470326, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470326, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470326, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470326, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 11 11:05:30.914: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0,
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470326, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470326, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470326, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470326, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 11 11:05:32.914: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jun 11 11:05:32.924: INFO: Updating deployment test-recreate-deployment Jun 11 11:05:32.924: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Jun 11 11:05:33.697: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-2017 /apis/apps/v1/namespaces/deployment-2017/deployments/test-recreate-deployment 1b8578b5-944d-4e48-afe5-0b9e6758a8c6 11507334 2 2020-06-11 11:05:26 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-06-11 11:05:32 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 
123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-06-11 11:05:33 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 
116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] 
[] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002943548 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-06-11 11:05:33 +0000 UTC,LastTransitionTime:2020-06-11 11:05:33 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-06-11 11:05:33 +0000 UTC,LastTransitionTime:2020-06-11 11:05:26 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jun 11 11:05:33.705: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-2017 /apis/apps/v1/namespaces/deployment-2017/replicasets/test-recreate-deployment-d5667d9c7 692c5490-980a-4ff3-b184-1a12ec6f7aa8 11507332 1 2020-06-11 11:05:33 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 
deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 1b8578b5-944d-4e48-afe5-0b9e6758a8c6 0xc002b5c1c0 0xc002b5c1c1}] [] [{kube-controller-manager Update apps/v1 2020-06-11 11:05:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 49 98 56 53 55 56 98 53 45 57 52 52 100 45 52 101 52 56 45 97 102 101 53 45 48 98 57 101 54 55 53 56 97 56 99 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 
110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 
125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002b5c238 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 11 11:05:33.706: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jun 11 11:05:33.706: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-74d98b5f7c deployment-2017 /apis/apps/v1/namespaces/deployment-2017/replicasets/test-recreate-deployment-74d98b5f7c f456cf1e-aa16-483d-8c3f-5e6baf3f828e 11507322 2 2020-06-11 11:05:26 +0000 UTC map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 1b8578b5-944d-4e48-afe5-0b9e6758a8c6 0xc002b5c0c7 0xc002b5c0c8}] [] [{kube-controller-manager Update apps/v1 2020-06-11 11:05:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 
123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 49 98 56 53 55 56 98 53 45 57 52 52 100 45 52 101 52 56 45 97 102 101 53 45 48 98 57 101 54 55 53 56 97 56 99 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 
45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 74d98b5f7c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002b5c158 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 11 11:05:33.709: INFO: Pod "test-recreate-deployment-d5667d9c7-vdwbx" is not available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-vdwbx test-recreate-deployment-d5667d9c7- deployment-2017 /api/v1/namespaces/deployment-2017/pods/test-recreate-deployment-d5667d9c7-vdwbx 9615d07a-4b27-43dc-942e-8d32cbab4e7e 11507335 0 2020-06-11 11:05:33 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 692c5490-980a-4ff3-b184-1a12ec6f7aa8 0xc002943960 0xc002943961}] [] [{kube-controller-manager Update v1 2020-06-11 11:05:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"692c5490-980a-4ff3-b184-1a12ec6f7aa8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-06-11 11:05:33 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zlx9r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zlx9r,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zlx9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:05:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:05:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:05:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:05:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-06-11 11:05:33 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:05:33.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2017" for this suite. • [SLOW TEST:7.945 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":27,"skipped":458,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:05:33.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) 
[LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 11 11:05:34.333: INFO: Waiting up to 5m0s for pod "pod-fc13a094-d78d-4220-b043-ef0542788ee9" in namespace "emptydir-6008" to be "Succeeded or Failed" Jun 11 11:05:34.361: INFO: Pod "pod-fc13a094-d78d-4220-b043-ef0542788ee9": Phase="Pending", Reason="", readiness=false. Elapsed: 28.355838ms Jun 11 11:05:36.387: INFO: Pod "pod-fc13a094-d78d-4220-b043-ef0542788ee9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05439681s Jun 11 11:05:38.392: INFO: Pod "pod-fc13a094-d78d-4220-b043-ef0542788ee9": Phase="Running", Reason="", readiness=true. Elapsed: 4.059066668s Jun 11 11:05:40.403: INFO: Pod "pod-fc13a094-d78d-4220-b043-ef0542788ee9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.069544469s STEP: Saw pod success Jun 11 11:05:40.403: INFO: Pod "pod-fc13a094-d78d-4220-b043-ef0542788ee9" satisfied condition "Succeeded or Failed" Jun 11 11:05:40.406: INFO: Trying to get logs from node kali-worker2 pod pod-fc13a094-d78d-4220-b043-ef0542788ee9 container test-container: STEP: delete the pod Jun 11 11:05:40.447: INFO: Waiting for pod pod-fc13a094-d78d-4220-b043-ef0542788ee9 to disappear Jun 11 11:05:40.469: INFO: Pod pod-fc13a094-d78d-4220-b043-ef0542788ee9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:05:40.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6008" for this suite. 
• [SLOW TEST:6.759 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":28,"skipped":497,"failed":0} SSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:05:40.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:05:41.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7840" for this suite. 
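The QoS test above verifies that a pod whose containers have matching resource requests and limits for cpu and memory is assigned the Guaranteed class. A simplified sketch of the classification rule (an illustrative reimplementation, not the kubelet's code; it ignores defaulting of unset requests to limits):

```go
package main

import "fmt"

// resources is a toy stand-in for a container's resource section.
type resources struct {
	requests map[string]string
	limits   map[string]string
}

// qosClass derives the pod QoS class: BestEffort when no container
// sets any requests or limits, Guaranteed when every container's cpu
// and memory requests equal its limits, Burstable otherwise.
func qosClass(containers []resources) string {
	empty := true
	guaranteed := true
	for _, c := range containers {
		if len(c.requests) > 0 || len(c.limits) > 0 {
			empty = false
		}
		for _, res := range []string{"cpu", "memory"} {
			if c.requests[res] == "" || c.requests[res] != c.limits[res] {
				guaranteed = false
			}
		}
	}
	switch {
	case empty:
		return "BestEffort"
	case guaranteed:
		return "Guaranteed"
	default:
		return "Burstable"
	}
}

func main() {
	pod := []resources{{
		requests: map[string]string{"cpu": "100m", "memory": "100Mi"},
		limits:   map[string]string{"cpu": "100m", "memory": "100Mi"},
	}}
	fmt.Println(qosClass(pod)) // prints "Guaranteed"
}
```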
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":29,"skipped":500,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:05:41.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jun 11 11:05:41.211: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config version' Jun 11 11:05:41.342: INFO: stderr: "" Jun 11 11:05:41.342: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-06-08T19:09:43Z\", GoVersion:\"go1.13.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:05:41.342: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8870" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":30,"skipped":503,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:05:41.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-secret-5x2s STEP: Creating a pod to test atomic-volume-subpath Jun 11 11:05:41.435: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-5x2s" in namespace "subpath-8981" to be "Succeeded or Failed" Jun 11 11:05:41.450: INFO: Pod "pod-subpath-test-secret-5x2s": Phase="Pending", Reason="", readiness=false. Elapsed: 14.566941ms Jun 11 11:05:44.355: INFO: Pod "pod-subpath-test-secret-5x2s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.920147957s Jun 11 11:05:46.370: INFO: Pod "pod-subpath-test-secret-5x2s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.934815903s Jun 11 11:05:48.374: INFO: Pod "pod-subpath-test-secret-5x2s": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.938541868s Jun 11 11:05:50.378: INFO: Pod "pod-subpath-test-secret-5x2s": Phase="Running", Reason="", readiness=true. Elapsed: 8.94304186s Jun 11 11:05:52.382: INFO: Pod "pod-subpath-test-secret-5x2s": Phase="Running", Reason="", readiness=true. Elapsed: 10.947031797s Jun 11 11:05:54.387: INFO: Pod "pod-subpath-test-secret-5x2s": Phase="Running", Reason="", readiness=true. Elapsed: 12.951740601s Jun 11 11:05:56.391: INFO: Pod "pod-subpath-test-secret-5x2s": Phase="Running", Reason="", readiness=true. Elapsed: 14.956340026s Jun 11 11:05:58.396: INFO: Pod "pod-subpath-test-secret-5x2s": Phase="Running", Reason="", readiness=true. Elapsed: 16.960821129s Jun 11 11:06:00.400: INFO: Pod "pod-subpath-test-secret-5x2s": Phase="Running", Reason="", readiness=true. Elapsed: 18.96492454s Jun 11 11:06:02.475: INFO: Pod "pod-subpath-test-secret-5x2s": Phase="Running", Reason="", readiness=true. Elapsed: 21.039862985s Jun 11 11:06:04.479: INFO: Pod "pod-subpath-test-secret-5x2s": Phase="Running", Reason="", readiness=true. Elapsed: 23.044320343s Jun 11 11:06:06.488: INFO: Pod "pod-subpath-test-secret-5x2s": Phase="Running", Reason="", readiness=true. Elapsed: 25.053008467s Jun 11 11:06:08.493: INFO: Pod "pod-subpath-test-secret-5x2s": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 27.058379507s STEP: Saw pod success Jun 11 11:06:08.493: INFO: Pod "pod-subpath-test-secret-5x2s" satisfied condition "Succeeded or Failed" Jun 11 11:06:08.497: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-secret-5x2s container test-container-subpath-secret-5x2s: STEP: delete the pod Jun 11 11:06:08.564: INFO: Waiting for pod pod-subpath-test-secret-5x2s to disappear Jun 11 11:06:08.578: INFO: Pod pod-subpath-test-secret-5x2s no longer exists STEP: Deleting pod pod-subpath-test-secret-5x2s Jun 11 11:06:08.578: INFO: Deleting pod "pod-subpath-test-secret-5x2s" in namespace "subpath-8981" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:06:08.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8981" for this suite. • [SLOW TEST:27.239 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":31,"skipped":516,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:06:08.590: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jun 11 11:06:08.648: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jun 11 11:06:13.679: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 11 11:06:13.679: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Jun 11 11:06:14.101: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-663 /apis/apps/v1/namespaces/deployment-663/deployments/test-cleanup-deployment 460e0618-3f50-4c38-a7b0-979378eda543 11507565 1 2020-06-11 11:06:13 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-06-11 11:06:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost
us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001d44ca8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Jun 11 11:06:14.295: INFO: New ReplicaSet "test-cleanup-deployment-b4867b47f" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-b4867b47f deployment-663 /apis/apps/v1/namespaces/deployment-663/replicasets/test-cleanup-deployment-b4867b47f 9c71c44f-5dc9-4590-9079-7d49f5372400 11507568 1 2020-06-11 11:06:14 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 460e0618-3f50-4c38-a7b0-979378eda543 0xc001d45280 0xc001d45281}] [] [{kube-controller-manager Update apps/v1 2020-06-11 11:06:14 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460e0618-3f50-4c38-a7b0-979378eda543\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: b4867b47f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001d452f8 ClusterFirst map[] false false false
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 11 11:06:14.295: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jun 11 11:06:14.295: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-663 /apis/apps/v1/namespaces/deployment-663/replicasets/test-cleanup-controller f001eb58-efb7-4286-b120-94f7cfeb0825 11507567 1 2020-06-11 11:06:08 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 460e0618-3f50-4c38-a7b0-979378eda543 0xc001d45177 0xc001d45178}] [] [{e2e.test Update apps/v1 2020-06-11 11:06:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}],}} {kube-controller-manager Update apps/v1 2020-06-11 11:06:14 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"460e0618-3f50-4c38-a7b0-979378eda543\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001d45218 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 11 11:06:14.360: INFO: Pod "test-cleanup-controller-nttjt" is available: &Pod{ObjectMeta:{test-cleanup-controller-nttjt test-cleanup-controller- deployment-663 /api/v1/namespaces/deployment-663/pods/test-cleanup-controller-nttjt 9cb71b84-3dae-4759-bc5a-9ecd5de6c25f 11507558 0 2020-06-11 11:06:08 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller f001eb58-efb7-4286-b120-94f7cfeb0825 0xc000f07027 0xc000f07028}] [] [{kube-controller-manager Update v1 2020-06-11 11:06:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34
117 105 100 92 34 58 92 34 102 48 48 49 101 98 53 56 45 101 102 98 55 45 52 50 56 54 45 98 49 50 48 45 57 52 102 55 99 102 101 98 48 56 50 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-06-11 11:06:11 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 
110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 57 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l6mt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l6mt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l6mt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/
not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:06:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:06:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:06:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:06:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.198,StartTime:2020-06-11 11:06:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-11 11:06:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://04c34d0d0ca36a442b748b0a4ddf26bb8e0a0610c1d2a6e9e571d1c0de3f7d1f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.198,},},EphemeralContainerStatuses:[]ContainerStatus{},},} 
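The `Raw:*[123 34 102 58 …]` blocks in the ReplicaSet and Pod dumps above are FieldsV1 managed-fields payloads printed as decimal byte values; decoding them recovers ordinary JSON such as `{"f:metadata":{…}}`. A minimal stdlib sketch (the helper name `decodeRawDump` is chosen here, not part of the e2e framework):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// decodeRawDump converts the decimal byte dump that the FieldsV1
// stringer prints (e.g. "123 34 102 58 ...") back into readable JSON.
func decodeRawDump(dump string) (string, error) {
	fields := strings.Fields(dump)
	buf := make([]byte, 0, len(fields))
	for _, f := range fields {
		n, err := strconv.Atoi(f)
		if err != nil {
			return "", err
		}
		buf = append(buf, byte(n))
	}
	return string(buf), nil
}

func main() {
	// First bytes of the managed-fields dump above.
	s, err := decodeRawDump("123 34 102 58 109 101 116 97 100 97 116 97 34 58 123")
	if err != nil {
		panic(err)
	}
	fmt.Println(s) // {"f:metadata":{
}
```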
Jun 11 11:06:14.360: INFO: Pod "test-cleanup-deployment-b4867b47f-tz9j7" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-b4867b47f-tz9j7 test-cleanup-deployment-b4867b47f- deployment-663 /api/v1/namespaces/deployment-663/pods/test-cleanup-deployment-b4867b47f-tz9j7 130b10c9-eede-4ff2-a103-1a0f5cbd163a 11507573 0 2020-06-11 11:06:14 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-b4867b47f 9c71c44f-5dc9-4590-9079-7d49f5372400 0xc000f071f0 0xc000f071f1}] [] [{kube-controller-manager Update v1 2020-06-11 11:06:14 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 99 55 49 99 52 52 102 45 53 100 99 57 45 52 53 57 48 45 57 48 55 57 45 55 100 52 57 102 53 51 55 50 52 48 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 
34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l6mt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l6mt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l6mt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:
nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:06:14 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:06:14.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-663" for this suite. • [SLOW TEST:5.858 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":32,"skipped":528,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:06:14.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jun 11 11:06:14.510: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl 
create and apply) allows request with any unknown properties Jun 11 11:06:18.482: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-16 create -f -' Jun 11 11:06:25.778: INFO: stderr: "" Jun 11 11:06:25.778: INFO: stdout: "e2e-test-crd-publish-openapi-389-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jun 11 11:06:25.778: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-16 delete e2e-test-crd-publish-openapi-389-crds test-cr' Jun 11 11:06:25.889: INFO: stderr: "" Jun 11 11:06:25.889: INFO: stdout: "e2e-test-crd-publish-openapi-389-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Jun 11 11:06:25.889: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-16 apply -f -' Jun 11 11:06:29.391: INFO: stderr: "" Jun 11 11:06:29.391: INFO: stdout: "e2e-test-crd-publish-openapi-389-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jun 11 11:06:29.391: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-16 delete e2e-test-crd-publish-openapi-389-crds test-cr' Jun 11 11:06:29.662: INFO: stderr: "" Jun 11 11:06:29.662: INFO: stdout: "e2e-test-crd-publish-openapi-389-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jun 11 11:06:29.662: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-389-crds' Jun 11 11:06:32.739: INFO: stderr: "" Jun 11 11:06:32.739: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-389-crd\nVERSION: 
crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:06:35.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-16" for this suite. 
• [SLOW TEST:20.667 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":33,"skipped":537,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:06:35.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Jun 11 11:06:35.174: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bb5c535b-b0f2-4dc9-8fa5-ba99f191bfdf" in namespace "projected-6763" to be "Succeeded or Failed" Jun 11 11:06:35.230: INFO: Pod "downwardapi-volume-bb5c535b-b0f2-4dc9-8fa5-ba99f191bfdf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 55.166647ms Jun 11 11:06:37.234: INFO: Pod "downwardapi-volume-bb5c535b-b0f2-4dc9-8fa5-ba99f191bfdf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059817727s Jun 11 11:06:39.238: INFO: Pod "downwardapi-volume-bb5c535b-b0f2-4dc9-8fa5-ba99f191bfdf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063167338s STEP: Saw pod success Jun 11 11:06:39.238: INFO: Pod "downwardapi-volume-bb5c535b-b0f2-4dc9-8fa5-ba99f191bfdf" satisfied condition "Succeeded or Failed" Jun 11 11:06:39.240: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-bb5c535b-b0f2-4dc9-8fa5-ba99f191bfdf container client-container: STEP: delete the pod Jun 11 11:06:39.286: INFO: Waiting for pod downwardapi-volume-bb5c535b-b0f2-4dc9-8fa5-ba99f191bfdf to disappear Jun 11 11:06:39.298: INFO: Pod downwardapi-volume-bb5c535b-b0f2-4dc9-8fa5-ba99f191bfdf no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:06:39.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6763" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":34,"skipped":542,"failed":0} ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:06:39.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token Jun 11 11:06:39.978: INFO: created pod pod-service-account-defaultsa Jun 11 11:06:39.978: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jun 11 11:06:40.004: INFO: created pod pod-service-account-mountsa Jun 11 11:06:40.004: INFO: pod pod-service-account-mountsa service account token volume mount: true Jun 11 11:06:40.018: INFO: created pod pod-service-account-nomountsa Jun 11 11:06:40.018: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jun 11 11:06:40.063: INFO: created pod pod-service-account-defaultsa-mountspec Jun 11 11:06:40.063: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jun 11 11:06:40.067: INFO: created pod pod-service-account-mountsa-mountspec Jun 11 11:06:40.067: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jun 11 11:06:40.078: INFO: created pod pod-service-account-nomountsa-mountspec Jun 11 11:06:40.078: INFO: pod 
pod-service-account-nomountsa-mountspec service account token volume mount: true Jun 11 11:06:40.114: INFO: created pod pod-service-account-defaultsa-nomountspec Jun 11 11:06:40.114: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jun 11 11:06:40.138: INFO: created pod pod-service-account-mountsa-nomountspec Jun 11 11:06:40.138: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jun 11 11:06:40.219: INFO: created pod pod-service-account-nomountsa-nomountspec Jun 11 11:06:40.219: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:06:40.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8030" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":275,"completed":35,"skipped":542,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:06:40.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Jun 11 11:06:40.478: INFO: Waiting up to 5m0s for pod "downwardapi-volume-682e8dc7-e8e1-4d00-8fcd-e48a5b9f8216" in namespace "downward-api-994" to be "Succeeded or Failed" Jun 11 11:06:40.520: INFO: Pod "downwardapi-volume-682e8dc7-e8e1-4d00-8fcd-e48a5b9f8216": Phase="Pending", Reason="", readiness=false. Elapsed: 42.177766ms Jun 11 11:06:42.643: INFO: Pod "downwardapi-volume-682e8dc7-e8e1-4d00-8fcd-e48a5b9f8216": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165322641s Jun 11 11:06:45.200: INFO: Pod "downwardapi-volume-682e8dc7-e8e1-4d00-8fcd-e48a5b9f8216": Phase="Pending", Reason="", readiness=false. Elapsed: 4.722703565s Jun 11 11:06:48.515: INFO: Pod "downwardapi-volume-682e8dc7-e8e1-4d00-8fcd-e48a5b9f8216": Phase="Pending", Reason="", readiness=false. Elapsed: 8.037105845s Jun 11 11:06:50.641: INFO: Pod "downwardapi-volume-682e8dc7-e8e1-4d00-8fcd-e48a5b9f8216": Phase="Pending", Reason="", readiness=false. Elapsed: 10.16349784s Jun 11 11:06:52.709: INFO: Pod "downwardapi-volume-682e8dc7-e8e1-4d00-8fcd-e48a5b9f8216": Phase="Pending", Reason="", readiness=false. Elapsed: 12.231761635s Jun 11 11:06:55.176: INFO: Pod "downwardapi-volume-682e8dc7-e8e1-4d00-8fcd-e48a5b9f8216": Phase="Running", Reason="", readiness=true. Elapsed: 14.698700562s Jun 11 11:06:57.296: INFO: Pod "downwardapi-volume-682e8dc7-e8e1-4d00-8fcd-e48a5b9f8216": Phase="Running", Reason="", readiness=true. Elapsed: 16.817962914s Jun 11 11:06:59.313: INFO: Pod "downwardapi-volume-682e8dc7-e8e1-4d00-8fcd-e48a5b9f8216": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 18.835244152s STEP: Saw pod success Jun 11 11:06:59.313: INFO: Pod "downwardapi-volume-682e8dc7-e8e1-4d00-8fcd-e48a5b9f8216" satisfied condition "Succeeded or Failed" Jun 11 11:06:59.315: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-682e8dc7-e8e1-4d00-8fcd-e48a5b9f8216 container client-container: STEP: delete the pod Jun 11 11:06:59.335: INFO: Waiting for pod downwardapi-volume-682e8dc7-e8e1-4d00-8fcd-e48a5b9f8216 to disappear Jun 11 11:06:59.340: INFO: Pod downwardapi-volume-682e8dc7-e8e1-4d00-8fcd-e48a5b9f8216 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:06:59.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-994" for this suite. • [SLOW TEST:18.994 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":36,"skipped":596,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:06:59.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be 
provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-0fd5da3f-8e86-43a0-8a29-743349082387 STEP: Creating a pod to test consume secrets Jun 11 11:06:59.447: INFO: Waiting up to 5m0s for pod "pod-secrets-71c34ae0-c5db-4fea-a6e8-64053e40cf3a" in namespace "secrets-1544" to be "Succeeded or Failed" Jun 11 11:06:59.450: INFO: Pod "pod-secrets-71c34ae0-c5db-4fea-a6e8-64053e40cf3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.566965ms Jun 11 11:07:01.456: INFO: Pod "pod-secrets-71c34ae0-c5db-4fea-a6e8-64053e40cf3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008507996s Jun 11 11:07:03.460: INFO: Pod "pod-secrets-71c34ae0-c5db-4fea-a6e8-64053e40cf3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012827082s STEP: Saw pod success Jun 11 11:07:03.460: INFO: Pod "pod-secrets-71c34ae0-c5db-4fea-a6e8-64053e40cf3a" satisfied condition "Succeeded or Failed" Jun 11 11:07:03.463: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-71c34ae0-c5db-4fea-a6e8-64053e40cf3a container secret-volume-test: STEP: delete the pod Jun 11 11:07:03.852: INFO: Waiting for pod pod-secrets-71c34ae0-c5db-4fea-a6e8-64053e40cf3a to disappear Jun 11 11:07:03.906: INFO: Pod pod-secrets-71c34ae0-c5db-4fea-a6e8-64053e40cf3a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:07:03.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1544" for this suite. 
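Between specs the runner emits JSON progress records such as `{"msg":"PASSED …","total":275,"completed":37,"skipped":598,"failed":0}`; tooling that consumes these logs typically decodes them rather than grepping. A stdlib sketch (the struct name is chosen here for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// progressRecord matches the per-spec JSON status lines in the log.
type progressRecord struct {
	Msg       string `json:"msg"`
	Total     int    `json:"total"`
	Completed int    `json:"completed"`
	Skipped   int    `json:"skipped"`
	Failed    int    `json:"failed"`
}

func parseProgress(line string) (progressRecord, error) {
	var rec progressRecord
	err := json.Unmarshal([]byte(line), &rec)
	return rec, err
}

func main() {
	rec, err := parseProgress(`{"msg":"PASSED [sig-storage] Secrets ...","total":275,"completed":37,"skipped":598,"failed":0}`)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d/%d completed, %d skipped, %d failed\n",
		rec.Completed, rec.Total, rec.Skipped, rec.Failed)
}
```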
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":37,"skipped":598,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:07:03.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:07:04.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8291" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":38,"skipped":600,"failed":0} ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:07:04.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:07:12.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7793" for this suite. 
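The hostAliases test that follows in the log exercises the `pod.spec.hostAliases` field, which the kubelet writes into the container's `/etc/hosts`. A sketch of the kind of pod involved (names and entries are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod       # illustrative name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames: ["foo.local", "bar.local"]   # extra /etc/hosts entries
  containers:
  - name: check-hosts
    image: busybox
    command: ["sh", "-c", "cat /etc/hosts"]  # output should contain the aliases
```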
• [SLOW TEST:7.506 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:137 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":39,"skipped":600,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:07:12.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jun 11 11:07:12.281: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jun 11 11:07:15.294: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6872 create -f -' Jun 11 11:07:28.762: INFO: stderr: "" Jun 11 11:07:28.762: INFO: stdout: 
"e2e-test-crd-publish-openapi-3912-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jun 11 11:07:28.762: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6872 delete e2e-test-crd-publish-openapi-3912-crds test-cr' Jun 11 11:07:28.914: INFO: stderr: "" Jun 11 11:07:28.914: INFO: stdout: "e2e-test-crd-publish-openapi-3912-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Jun 11 11:07:28.914: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6872 apply -f -' Jun 11 11:07:32.380: INFO: stderr: "" Jun 11 11:07:32.380: INFO: stdout: "e2e-test-crd-publish-openapi-3912-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jun 11 11:07:32.381: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6872 delete e2e-test-crd-publish-openapi-3912-crds test-cr' Jun 11 11:07:32.479: INFO: stderr: "" Jun 11 11:07:32.479: INFO: stdout: "e2e-test-crd-publish-openapi-3912-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Jun 11 11:07:32.479: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3912-crds' Jun 11 11:07:35.987: INFO: stderr: "" Jun 11 11:07:35.987: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3912-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:07:38.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"crd-publish-openapi-6872" for this suite. • [SLOW TEST:26.688 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":40,"skipped":615,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:07:38.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:08:39.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1418" for this suite. 
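The readiness-probe test above runs a pod whose probe can never succeed and verifies two things over the 60-second window visible in the log: the pod never reports Ready, and the container is never restarted (a failing readiness probe, unlike a failing liveness probe, does not restart the container). A minimal sketch, with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-never-pod   # illustrative name
spec:
  containers:
  - name: test-webserver
    image: busybox            # placeholder image
    command: ["sleep", "600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails, so Ready stays False
      initialDelaySeconds: 5
      periodSeconds: 5
```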
• [SLOW TEST:60.128 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":41,"skipped":635,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:08:39.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Jun 11 11:08:39.179: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bae7a6bf-fcaf-4070-8fd4-d2dd1400fa88" in namespace "projected-47" to be "Succeeded or Failed" Jun 11 11:08:39.207: INFO: Pod "downwardapi-volume-bae7a6bf-fcaf-4070-8fd4-d2dd1400fa88": Phase="Pending", Reason="", readiness=false. 
Elapsed: 28.035424ms Jun 11 11:08:41.836: INFO: Pod "downwardapi-volume-bae7a6bf-fcaf-4070-8fd4-d2dd1400fa88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.657564515s Jun 11 11:08:43.842: INFO: Pod "downwardapi-volume-bae7a6bf-fcaf-4070-8fd4-d2dd1400fa88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.663088708s Jun 11 11:08:45.846: INFO: Pod "downwardapi-volume-bae7a6bf-fcaf-4070-8fd4-d2dd1400fa88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.666735337s STEP: Saw pod success Jun 11 11:08:45.846: INFO: Pod "downwardapi-volume-bae7a6bf-fcaf-4070-8fd4-d2dd1400fa88" satisfied condition "Succeeded or Failed" Jun 11 11:08:45.848: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-bae7a6bf-fcaf-4070-8fd4-d2dd1400fa88 container client-container: STEP: delete the pod Jun 11 11:08:45.891: INFO: Waiting for pod downwardapi-volume-bae7a6bf-fcaf-4070-8fd4-d2dd1400fa88 to disappear Jun 11 11:08:45.925: INFO: Pod downwardapi-volume-bae7a6bf-fcaf-4070-8fd4-d2dd1400fa88 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:08:45.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-47" for this suite. 
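The projected downward API test above mounts `limits.cpu` into a file without setting a CPU limit on the container; in that case the downward API falls back to the node's allocatable CPU. A sketch of the shape of the pod (names and image are placeholders, not the framework's generated objects):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-pod   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox            # placeholder image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # note: no resources.limits.cpu here, so the file should contain
    # the node's allocatable CPU instead
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```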
• [SLOW TEST:6.873 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":42,"skipped":640,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:08:45.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 11 11:08:46.666: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 11 11:08:48.847: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470526, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470526, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470526, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470526, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 11 11:08:50.867: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470526, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470526, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470526, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470526, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 11 11:08:53.942: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook 
should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:08:54.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5995" for this suite. STEP: Destroying namespace "webhook-5995-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.550 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":43,"skipped":673,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: 
Creating a kubernetes client Jun 11 11:08:54.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on node default medium Jun 11 11:08:54.668: INFO: Waiting up to 5m0s for pod "pod-015a992a-5bf4-4c52-bdf6-19e84a426093" in namespace "emptydir-1953" to be "Succeeded or Failed" Jun 11 11:08:54.699: INFO: Pod "pod-015a992a-5bf4-4c52-bdf6-19e84a426093": Phase="Pending", Reason="", readiness=false. Elapsed: 30.796473ms Jun 11 11:08:56.909: INFO: Pod "pod-015a992a-5bf4-4c52-bdf6-19e84a426093": Phase="Pending", Reason="", readiness=false. Elapsed: 2.240253111s Jun 11 11:08:58.913: INFO: Pod "pod-015a992a-5bf4-4c52-bdf6-19e84a426093": Phase="Running", Reason="", readiness=true. Elapsed: 4.24490242s Jun 11 11:09:00.990: INFO: Pod "pod-015a992a-5bf4-4c52-bdf6-19e84a426093": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.32211411s STEP: Saw pod success Jun 11 11:09:00.991: INFO: Pod "pod-015a992a-5bf4-4c52-bdf6-19e84a426093" satisfied condition "Succeeded or Failed" Jun 11 11:09:01.429: INFO: Trying to get logs from node kali-worker pod pod-015a992a-5bf4-4c52-bdf6-19e84a426093 container test-container: STEP: delete the pod Jun 11 11:09:02.385: INFO: Waiting for pod pod-015a992a-5bf4-4c52-bdf6-19e84a426093 to disappear Jun 11 11:09:02.586: INFO: Pod pod-015a992a-5bf4-4c52-bdf6-19e84a426093 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:09:02.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1953" for this suite. 
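The emptyDir test above checks the file mode of a volume on the default medium (node disk, since `medium` is omitted). A sketch of the kind of pod involved; the name, image, and the exact mode-checking command are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-pod     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox            # placeholder image
    # prints the mount's permission bits; the test verifies the
    # expected default mode for an emptyDir volume
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}              # medium omitted -> node's default storage
```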
• [SLOW TEST:8.111 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":44,"skipped":676,"failed":0} SSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:09:02.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override all Jun 11 11:09:03.691: INFO: Waiting up to 5m0s for pod "client-containers-0756c506-264d-4d7c-bede-6b4e4cdf039d" in namespace "containers-9873" to be "Succeeded or Failed" Jun 11 11:09:04.070: INFO: Pod "client-containers-0756c506-264d-4d7c-bede-6b4e4cdf039d": Phase="Pending", Reason="", readiness=false. Elapsed: 378.497852ms Jun 11 11:09:06.262: INFO: Pod "client-containers-0756c506-264d-4d7c-bede-6b4e4cdf039d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.57056964s Jun 11 11:09:08.264: INFO: Pod "client-containers-0756c506-264d-4d7c-bede-6b4e4cdf039d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.572920857s Jun 11 11:09:10.567: INFO: Pod "client-containers-0756c506-264d-4d7c-bede-6b4e4cdf039d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.875705011s Jun 11 11:09:12.620: INFO: Pod "client-containers-0756c506-264d-4d7c-bede-6b4e4cdf039d": Phase="Running", Reason="", readiness=true. Elapsed: 8.928992689s Jun 11 11:09:14.625: INFO: Pod "client-containers-0756c506-264d-4d7c-bede-6b4e4cdf039d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.933274458s STEP: Saw pod success Jun 11 11:09:14.625: INFO: Pod "client-containers-0756c506-264d-4d7c-bede-6b4e4cdf039d" satisfied condition "Succeeded or Failed" Jun 11 11:09:14.627: INFO: Trying to get logs from node kali-worker pod client-containers-0756c506-264d-4d7c-bede-6b4e4cdf039d container test-container: STEP: delete the pod Jun 11 11:09:14.886: INFO: Waiting for pod client-containers-0756c506-264d-4d7c-bede-6b4e4cdf039d to disappear Jun 11 11:09:14.920: INFO: Pod client-containers-0756c506-264d-4d7c-bede-6b4e4cdf039d no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:09:14.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9873" for this suite. 
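The Docker Containers test above ("override all") relies on the mapping between pod spec fields and image metadata: `command` overrides the image's ENTRYPOINT and `args` overrides its CMD. A minimal sketch with placeholder names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-pod # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox            # placeholder; the test uses its own entrypoint image
    command: ["echo"]                 # replaces the image's ENTRYPOINT
    args: ["override", "arguments"]   # replaces the image's CMD
```

The test then reads the container logs to confirm the overridden command produced the expected output rather than the image's built-in default.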
• [SLOW TEST:12.334 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":45,"skipped":682,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:09:14.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-9678 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 11 11:09:15.578: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 11 11:09:16.027: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 11 11:09:18.032: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 11 11:09:20.075: INFO: The status of Pod netserver-0 is Running (Ready = 
false) Jun 11 11:09:22.032: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 11 11:09:24.032: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 11 11:09:26.039: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 11 11:09:28.032: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 11 11:09:30.032: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 11 11:09:32.032: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 11 11:09:34.032: INFO: The status of Pod netserver-0 is Running (Ready = true) Jun 11 11:09:34.040: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jun 11 11:09:40.138: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.210 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9678 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 11 11:09:40.138: INFO: >>> kubeConfig: /root/.kube/config I0611 11:09:40.168207 7 log.go:172] (0xc001f000b0) (0xc002324460) Create stream I0611 11:09:40.168246 7 log.go:172] (0xc001f000b0) (0xc002324460) Stream added, broadcasting: 1 I0611 11:09:40.170664 7 log.go:172] (0xc001f000b0) Reply frame received for 1 I0611 11:09:40.170703 7 log.go:172] (0xc001f000b0) (0xc00124a000) Create stream I0611 11:09:40.170718 7 log.go:172] (0xc001f000b0) (0xc00124a000) Stream added, broadcasting: 3 I0611 11:09:40.171667 7 log.go:172] (0xc001f000b0) Reply frame received for 3 I0611 11:09:40.171694 7 log.go:172] (0xc001f000b0) (0xc00124a0a0) Create stream I0611 11:09:40.171708 7 log.go:172] (0xc001f000b0) (0xc00124a0a0) Stream added, broadcasting: 5 I0611 11:09:40.172642 7 log.go:172] (0xc001f000b0) Reply frame received for 5 I0611 11:09:41.306741 7 log.go:172] (0xc001f000b0) Data frame received for 3 I0611 11:09:41.306851 7 log.go:172] (0xc00124a000) (3) Data frame handling I0611 11:09:41.306900 7 log.go:172] 
(0xc00124a000) (3) Data frame sent I0611 11:09:41.307201 7 log.go:172] (0xc001f000b0) Data frame received for 3 I0611 11:09:41.307240 7 log.go:172] (0xc00124a000) (3) Data frame handling I0611 11:09:41.307704 7 log.go:172] (0xc001f000b0) Data frame received for 5 I0611 11:09:41.307743 7 log.go:172] (0xc00124a0a0) (5) Data frame handling I0611 11:09:41.310383 7 log.go:172] (0xc001f000b0) Data frame received for 1 I0611 11:09:41.310416 7 log.go:172] (0xc002324460) (1) Data frame handling I0611 11:09:41.310648 7 log.go:172] (0xc002324460) (1) Data frame sent I0611 11:09:41.310686 7 log.go:172] (0xc001f000b0) (0xc002324460) Stream removed, broadcasting: 1 I0611 11:09:41.310724 7 log.go:172] (0xc001f000b0) Go away received I0611 11:09:41.311015 7 log.go:172] (0xc001f000b0) (0xc002324460) Stream removed, broadcasting: 1 I0611 11:09:41.311044 7 log.go:172] (0xc001f000b0) (0xc00124a000) Stream removed, broadcasting: 3 I0611 11:09:41.311055 7 log.go:172] (0xc001f000b0) (0xc00124a0a0) Stream removed, broadcasting: 5 Jun 11 11:09:41.311: INFO: Found all expected endpoints: [netserver-0] Jun 11 11:09:41.315: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.150 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9678 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 11 11:09:41.315: INFO: >>> kubeConfig: /root/.kube/config I0611 11:09:41.350820 7 log.go:172] (0xc00251a370) (0xc0029506e0) Create stream I0611 11:09:41.350847 7 log.go:172] (0xc00251a370) (0xc0029506e0) Stream added, broadcasting: 1 I0611 11:09:41.353081 7 log.go:172] (0xc00251a370) Reply frame received for 1 I0611 11:09:41.353106 7 log.go:172] (0xc00251a370) (0xc002324500) Create stream I0611 11:09:41.353237 7 log.go:172] (0xc00251a370) (0xc002324500) Stream added, broadcasting: 3 I0611 11:09:41.354267 7 log.go:172] (0xc00251a370) Reply frame received for 3 I0611 11:09:41.354306 7 log.go:172] (0xc00251a370) 
(0xc00124a140) Create stream I0611 11:09:41.354322 7 log.go:172] (0xc00251a370) (0xc00124a140) Stream added, broadcasting: 5 I0611 11:09:41.355444 7 log.go:172] (0xc00251a370) Reply frame received for 5 I0611 11:09:42.436242 7 log.go:172] (0xc00251a370) Data frame received for 5 I0611 11:09:42.436308 7 log.go:172] (0xc00124a140) (5) Data frame handling I0611 11:09:42.436358 7 log.go:172] (0xc00251a370) Data frame received for 3 I0611 11:09:42.436472 7 log.go:172] (0xc002324500) (3) Data frame handling I0611 11:09:42.436515 7 log.go:172] (0xc002324500) (3) Data frame sent I0611 11:09:42.436542 7 log.go:172] (0xc00251a370) Data frame received for 3 I0611 11:09:42.436563 7 log.go:172] (0xc002324500) (3) Data frame handling I0611 11:09:42.439168 7 log.go:172] (0xc00251a370) Data frame received for 1 I0611 11:09:42.439191 7 log.go:172] (0xc0029506e0) (1) Data frame handling I0611 11:09:42.439206 7 log.go:172] (0xc0029506e0) (1) Data frame sent I0611 11:09:42.439227 7 log.go:172] (0xc00251a370) (0xc0029506e0) Stream removed, broadcasting: 1 I0611 11:09:42.439346 7 log.go:172] (0xc00251a370) (0xc0029506e0) Stream removed, broadcasting: 1 I0611 11:09:42.439437 7 log.go:172] (0xc00251a370) Go away received I0611 11:09:42.439528 7 log.go:172] (0xc00251a370) (0xc002324500) Stream removed, broadcasting: 3 I0611 11:09:42.439600 7 log.go:172] (0xc00251a370) (0xc00124a140) Stream removed, broadcasting: 5 Jun 11 11:09:42.439: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:09:42.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9678" for this suite. 
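The node-pod UDP check above execs into a host-network test pod and probes each netserver pod's UDP port; the probe command appears verbatim in the log (`echo hostName | nc -w 1 -u <pod-ip> 8081`). A rough sketch of the host-side pod, with a placeholder image (the real framework uses its agnhost test image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: host-test-container-pod
spec:
  hostNetwork: true           # runs in the node's network namespace,
                              # so traffic originates from the node
  restartPolicy: Never
  containers:
  - name: agnhost
    image: busybox            # placeholder; busybox's nc suffices for the sketch
    command: ["sleep", "3600"]
```

Because the pod shares the node's network namespace, a successful `nc` from inside it demonstrates node-to-pod UDP connectivity, which is exactly what the "Found all expected endpoints" lines confirm.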
• [SLOW TEST:27.518 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":46,"skipped":714,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:09:42.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jun 11 11:09:42.539: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-7ed5b609-078c-4d25-9066-3928dc0073b7" in namespace "security-context-test-2396" to be "Succeeded or Failed" Jun 11 11:09:42.552: INFO: Pod 
"alpine-nnp-false-7ed5b609-078c-4d25-9066-3928dc0073b7": Phase="Pending", Reason="", readiness=false. Elapsed: 13.098097ms Jun 11 11:09:44.568: INFO: Pod "alpine-nnp-false-7ed5b609-078c-4d25-9066-3928dc0073b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029687945s Jun 11 11:09:46.614: INFO: Pod "alpine-nnp-false-7ed5b609-078c-4d25-9066-3928dc0073b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075521734s Jun 11 11:09:46.614: INFO: Pod "alpine-nnp-false-7ed5b609-078c-4d25-9066-3928dc0073b7" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:09:46.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2396" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":47,"skipped":754,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:09:46.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Jun 11 11:09:46.988: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1d0917e6-9981-4898-b1bd-5dd06e3e7e7c" in namespace "projected-3750" to be "Succeeded or Failed" Jun 11 11:09:47.011: INFO: Pod "downwardapi-volume-1d0917e6-9981-4898-b1bd-5dd06e3e7e7c": Phase="Pending", Reason="", readiness=false. Elapsed: 23.341545ms Jun 11 11:09:49.018: INFO: Pod "downwardapi-volume-1d0917e6-9981-4898-b1bd-5dd06e3e7e7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029564788s Jun 11 11:09:51.021: INFO: Pod "downwardapi-volume-1d0917e6-9981-4898-b1bd-5dd06e3e7e7c": Phase="Running", Reason="", readiness=true. Elapsed: 4.033017967s Jun 11 11:09:53.025: INFO: Pod "downwardapi-volume-1d0917e6-9981-4898-b1bd-5dd06e3e7e7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036697629s STEP: Saw pod success Jun 11 11:09:53.025: INFO: Pod "downwardapi-volume-1d0917e6-9981-4898-b1bd-5dd06e3e7e7c" satisfied condition "Succeeded or Failed" Jun 11 11:09:53.027: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-1d0917e6-9981-4898-b1bd-5dd06e3e7e7c container client-container: STEP: delete the pod Jun 11 11:09:53.074: INFO: Waiting for pod downwardapi-volume-1d0917e6-9981-4898-b1bd-5dd06e3e7e7c to disappear Jun 11 11:09:53.147: INFO: Pod downwardapi-volume-1d0917e6-9981-4898-b1bd-5dd06e3e7e7c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:09:53.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3750" for this suite. 
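The projected downwardAPI test above verifies that a container's memory limit is exposed to it as a file. A minimal sketch of that pattern (names, image, and mount path are illustrative; the e2e framework generates its own) looks like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                # the suite uses its own test image
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
              divisor: 1Mi
```

The test then reads the container's logs and checks that the file content matches the declared limit, which is why the pod is expected to reach "Succeeded".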
• [SLOW TEST:6.525 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":48,"skipped":778,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:09:53.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container Jun 11 11:10:01.341: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-1951 PodName:pod-sharedvolume-b9215514-2974-4722-bd21-784e86666d98 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 11 11:10:01.341: INFO: >>> kubeConfig: /root/.kube/config I0611 11:10:01.382720 7 log.go:172] (0xc002a93810) (0xc00148d680) Create stream I0611 11:10:01.382750 7 
log.go:172] (0xc002a93810) (0xc00148d680) Stream added, broadcasting: 1 I0611 11:10:01.385074 7 log.go:172] (0xc002a93810) Reply frame received for 1 I0611 11:10:01.385303 7 log.go:172] (0xc002a93810) (0xc00124a460) Create stream I0611 11:10:01.385331 7 log.go:172] (0xc002a93810) (0xc00124a460) Stream added, broadcasting: 3 I0611 11:10:01.386246 7 log.go:172] (0xc002a93810) Reply frame received for 3 I0611 11:10:01.386279 7 log.go:172] (0xc002a93810) (0xc002ace140) Create stream I0611 11:10:01.386292 7 log.go:172] (0xc002a93810) (0xc002ace140) Stream added, broadcasting: 5 I0611 11:10:01.387213 7 log.go:172] (0xc002a93810) Reply frame received for 5 I0611 11:10:01.469818 7 log.go:172] (0xc002a93810) Data frame received for 5 I0611 11:10:01.469850 7 log.go:172] (0xc002ace140) (5) Data frame handling I0611 11:10:01.469878 7 log.go:172] (0xc002a93810) Data frame received for 3 I0611 11:10:01.469907 7 log.go:172] (0xc00124a460) (3) Data frame handling I0611 11:10:01.469932 7 log.go:172] (0xc00124a460) (3) Data frame sent I0611 11:10:01.469948 7 log.go:172] (0xc002a93810) Data frame received for 3 I0611 11:10:01.469959 7 log.go:172] (0xc00124a460) (3) Data frame handling I0611 11:10:01.471806 7 log.go:172] (0xc002a93810) Data frame received for 1 I0611 11:10:01.471836 7 log.go:172] (0xc00148d680) (1) Data frame handling I0611 11:10:01.471858 7 log.go:172] (0xc00148d680) (1) Data frame sent I0611 11:10:01.471886 7 log.go:172] (0xc002a93810) (0xc00148d680) Stream removed, broadcasting: 1 I0611 11:10:01.471903 7 log.go:172] (0xc002a93810) Go away received I0611 11:10:01.472035 7 log.go:172] (0xc002a93810) (0xc00148d680) Stream removed, broadcasting: 1 I0611 11:10:01.472090 7 log.go:172] (0xc002a93810) (0xc00124a460) Stream removed, broadcasting: 3 I0611 11:10:01.472119 7 log.go:172] (0xc002a93810) (0xc002ace140) Stream removed, broadcasting: 5 Jun 11 11:10:01.472: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:10:01.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1951" for this suite. • [SLOW TEST:8.324 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":49,"skipped":864,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:10:01.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-2241 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating statefulset ss in namespace 
statefulset-2241 Jun 11 11:10:01.615: INFO: Found 0 stateful pods, waiting for 1 Jun 11 11:10:11.622: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Jun 11 11:10:11.674: INFO: Deleting all statefulset in ns statefulset-2241 Jun 11 11:10:11.722: INFO: Scaling statefulset ss to 0 Jun 11 11:10:31.800: INFO: Waiting for statefulset status.replicas updated to 0 Jun 11 11:10:31.803: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:10:31.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2241" for this suite. 
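The StatefulSet scale-subresource test creates a one-replica StatefulSet named `ss` backed by a governing service named `test`, then updates `/scale` and checks that `Spec.Replicas` changed. A minimal sketch of the object under test (image and labels are illustrative; the suite uses its own test image):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test        # headless service created in BeforeEach
  replicas: 1
  selector:
    matchLabels:
      app: ss              # illustrative label
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: httpd:2.4   # illustrative; not the image the e2e suite uses
```

Outside the suite, the same scale subresource can be driven with `kubectl scale statefulset/ss --replicas=2`, which patches `/apis/apps/v1/namespaces/<ns>/statefulsets/ss/scale` rather than the StatefulSet object itself.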
• [SLOW TEST:30.351 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":50,"skipped":907,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:10:31.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service multi-endpoint-test in namespace services-6057 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6057 to expose endpoints map[] Jun 11 11:10:32.978: INFO: successfully validated that service multi-endpoint-test in namespace services-6057 exposes endpoints map[] (47.031949ms elapsed) STEP: Creating pod pod1 in namespace services-6057 STEP: waiting up to 3m0s for service 
multi-endpoint-test in namespace services-6057 to expose endpoints map[pod1:[100]] Jun 11 11:10:37.154: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.16622228s elapsed, will retry) Jun 11 11:10:38.162: INFO: successfully validated that service multi-endpoint-test in namespace services-6057 exposes endpoints map[pod1:[100]] (5.174111843s elapsed) STEP: Creating pod pod2 in namespace services-6057 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6057 to expose endpoints map[pod1:[100] pod2:[101]] Jun 11 11:10:42.514: INFO: successfully validated that service multi-endpoint-test in namespace services-6057 exposes endpoints map[pod1:[100] pod2:[101]] (4.346272779s elapsed) STEP: Deleting pod pod1 in namespace services-6057 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6057 to expose endpoints map[pod2:[101]] Jun 11 11:10:43.576: INFO: successfully validated that service multi-endpoint-test in namespace services-6057 exposes endpoints map[pod2:[101]] (1.057286655s elapsed) STEP: Deleting pod pod2 in namespace services-6057 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6057 to expose endpoints map[] Jun 11 11:10:43.699: INFO: successfully validated that service multi-endpoint-test in namespace services-6057 exposes endpoints map[] (59.742536ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:10:43.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6057" for this suite. 
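The endpoints maps in the log (`map[pod1:[100] pod2:[101]]`) come from a multi-port Service whose two ports target container ports 100 and 101; as pods matching the selector come and go, the corresponding endpoints appear and disappear. A sketch of such a Service (port names, service ports, and the selector label are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test   # pods pod1/pod2 must carry this label
  ports:
  - name: portname1
    port: 80
    targetPort: 100            # pod1 serves on container port 100
  - name: portname2
    port: 81
    targetPort: 101            # pod2 serves on container port 101
```

This is why deleting pod1 leaves only `map[pod2:[101]]`: endpoint readiness is tracked per pod per targeted port.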
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.094 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":275,"completed":51,"skipped":915,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:10:43.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-70/configmap-test-de24ed6d-fb85-4de7-851d-4abb440b3f88 STEP: Creating a pod to test consume configMaps Jun 11 11:10:44.372: INFO: Waiting up to 5m0s for pod "pod-configmaps-c810262b-4a7e-4ebc-bc24-92fa7b10cd0a" in namespace "configmap-70" to be "Succeeded or Failed" Jun 11 11:10:44.382: INFO: Pod "pod-configmaps-c810262b-4a7e-4ebc-bc24-92fa7b10cd0a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.706163ms Jun 11 11:10:46.386: INFO: Pod "pod-configmaps-c810262b-4a7e-4ebc-bc24-92fa7b10cd0a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.013759403s Jun 11 11:10:48.390: INFO: Pod "pod-configmaps-c810262b-4a7e-4ebc-bc24-92fa7b10cd0a": Phase="Running", Reason="", readiness=true. Elapsed: 4.018251799s Jun 11 11:10:50.394: INFO: Pod "pod-configmaps-c810262b-4a7e-4ebc-bc24-92fa7b10cd0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022583449s STEP: Saw pod success Jun 11 11:10:50.395: INFO: Pod "pod-configmaps-c810262b-4a7e-4ebc-bc24-92fa7b10cd0a" satisfied condition "Succeeded or Failed" Jun 11 11:10:50.398: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-c810262b-4a7e-4ebc-bc24-92fa7b10cd0a container env-test: STEP: delete the pod Jun 11 11:10:50.445: INFO: Waiting for pod pod-configmaps-c810262b-4a7e-4ebc-bc24-92fa7b10cd0a to disappear Jun 11 11:10:50.448: INFO: Pod pod-configmaps-c810262b-4a7e-4ebc-bc24-92fa7b10cd0a no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:10:50.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-70" for this suite. 
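The ConfigMap environment test injects a ConfigMap key into a container's environment and verifies the value from the container's output. A minimal sketch under assumed names (the key, variable name, and image are illustrative; the suite generates unique names like `configmap-test-de24ed6d-…`):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test       # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo  # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox           # illustrative; the suite uses its own test image
    command: ["sh", "-c", "echo $CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
```

The pod runs to completion, so the test waits for phase "Succeeded" and then reads the `env-test` container's logs, as seen above.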
• [SLOW TEST:6.528 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":52,"skipped":939,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:10:50.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 11 11:10:51.059: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 11 11:10:53.070: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470651, loc:(*time.Location)(0x7b200c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470651, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470651, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727470651, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 11 11:10:56.153: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:11:08.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2497" for this suite. STEP: Destroying namespace "webhook-2497-markers" for this suite. 
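The webhook-timeout test registers a deliberately slow (5s) admission webhook several times with different `timeoutSeconds` and `failurePolicy` settings: a 1s timeout with `failurePolicy: Fail` rejects the request, while `Ignore` lets it through. A hedged sketch of one such registration (webhook name, service path, and rules are illustrative, not the suite's exact values):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: slow-webhook              # illustrative name
webhooks:
- name: slow.example.com          # illustrative name
  clientConfig:
    service:
      name: e2e-test-webhook
      namespace: webhook-2497
      path: /always-allow-delay-5s   # assumed handler path
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  failurePolicy: Ignore     # with Fail, the 1s timeout rejects the request
  timeoutSeconds: 1         # shorter than the webhook's 5s latency
  sideEffects: None
  admissionReviewVersions: ["v1"]
```

When `timeoutSeconds` is omitted, v1 defaults it to 10s, which matches the final "Having no error when timeout is empty (defaulted to 10s in v1)" step in the log.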
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.090 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":53,"skipped":955,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:11:08.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Jun 11 11:11:09.111: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5728' Jun 11 11:11:09.827: INFO: stderr: "" Jun 11 11:11:09.827: INFO: stdout: 
"replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 11 11:11:09.827: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5728' Jun 11 11:11:10.453: INFO: stderr: "" Jun 11 11:11:10.453: INFO: stdout: "update-demo-nautilus-27wgn update-demo-nautilus-gf2lq " Jun 11 11:11:10.454: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27wgn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5728' Jun 11 11:11:10.682: INFO: stderr: "" Jun 11 11:11:10.683: INFO: stdout: "" Jun 11 11:11:10.683: INFO: update-demo-nautilus-27wgn is created but not running Jun 11 11:11:15.819: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5728' Jun 11 11:11:15.925: INFO: stderr: "" Jun 11 11:11:15.925: INFO: stdout: "update-demo-nautilus-27wgn update-demo-nautilus-gf2lq " Jun 11 11:11:15.925: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27wgn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5728' Jun 11 11:11:16.074: INFO: stderr: "" Jun 11 11:11:16.074: INFO: stdout: "true" Jun 11 11:11:16.074: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27wgn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5728' Jun 11 11:11:16.180: INFO: stderr: "" Jun 11 11:11:16.180: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 11 11:11:16.180: INFO: validating pod update-demo-nautilus-27wgn Jun 11 11:11:16.279: INFO: got data: { "image": "nautilus.jpg" } Jun 11 11:11:16.279: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 11 11:11:16.279: INFO: update-demo-nautilus-27wgn is verified up and running Jun 11 11:11:16.279: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gf2lq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5728' Jun 11 11:11:16.392: INFO: stderr: "" Jun 11 11:11:16.392: INFO: stdout: "true" Jun 11 11:11:16.392: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gf2lq -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5728' Jun 11 11:11:16.471: INFO: stderr: "" Jun 11 11:11:16.471: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 11 11:11:16.471: INFO: validating pod update-demo-nautilus-gf2lq Jun 11 11:11:16.488: INFO: got data: { "image": "nautilus.jpg" } Jun 11 11:11:16.488: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 11 11:11:16.488: INFO: update-demo-nautilus-gf2lq is verified up and running STEP: using delete to clean up resources Jun 11 11:11:16.488: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5728' Jun 11 11:11:17.504: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 11 11:11:17.504: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 11 11:11:17.504: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5728' Jun 11 11:11:19.245: INFO: stderr: "No resources found in kubectl-5728 namespace.\n" Jun 11 11:11:19.245: INFO: stdout: "" Jun 11 11:11:19.245: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5728 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 11 11:11:19.520: INFO: stderr: "" Jun 11 11:11:19.520: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:11:19.520: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "kubectl-5728" for this suite. • [SLOW TEST:10.982 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":54,"skipped":962,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jun 11 11:11:19.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-b7a6bfd8-051a-4520-a0d4-4a8eda7af0b9 STEP: Creating a pod to test consume configMaps Jun 11 11:11:22.399: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b78cf40f-fc33-401b-846d-74a7fd06716c" in namespace "projected-8945" to be "Succeeded or Failed" Jun 11 11:11:22.847: INFO: Pod "pod-projected-configmaps-b78cf40f-fc33-401b-846d-74a7fd06716c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 447.452807ms Jun 11 11:11:25.086: INFO: Pod "pod-projected-configmaps-b78cf40f-fc33-401b-846d-74a7fd06716c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.686943153s Jun 11 11:11:27.115: INFO: Pod "pod-projected-configmaps-b78cf40f-fc33-401b-846d-74a7fd06716c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.715554051s Jun 11 11:11:29.119: INFO: Pod "pod-projected-configmaps-b78cf40f-fc33-401b-846d-74a7fd06716c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.719509746s STEP: Saw pod success Jun 11 11:11:29.119: INFO: Pod "pod-projected-configmaps-b78cf40f-fc33-401b-846d-74a7fd06716c" satisfied condition "Succeeded or Failed" Jun 11 11:11:29.122: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-b78cf40f-fc33-401b-846d-74a7fd06716c container projected-configmap-volume-test: STEP: delete the pod Jun 11 11:11:29.144: INFO: Waiting for pod pod-projected-configmaps-b78cf40f-fc33-401b-846d-74a7fd06716c to disappear Jun 11 11:11:29.185: INFO: Pod pod-projected-configmaps-b78cf40f-fc33-401b-846d-74a7fd06716c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jun 11 11:11:29.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8945" for this suite. 
• [SLOW TEST:9.665 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":55,"skipped":977,"failed":0}
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:11:29.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-73b56dc9-38d7-4c2c-8c0a-7fcd15b40897
STEP: Creating a pod to test consume configMaps
Jun 11 11:11:29.891: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-27290f7a-0b67-4479-8544-1765b60fdfa2" in namespace "projected-5921" to be "Succeeded or Failed"
Jun 11 11:11:29.930: INFO: Pod "pod-projected-configmaps-27290f7a-0b67-4479-8544-1765b60fdfa2": Phase="Pending", Reason="", readiness=false. Elapsed: 39.090862ms
Jun 11 11:11:31.972: INFO: Pod "pod-projected-configmaps-27290f7a-0b67-4479-8544-1765b60fdfa2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081942937s
Jun 11 11:11:34.044: INFO: Pod "pod-projected-configmaps-27290f7a-0b67-4479-8544-1765b60fdfa2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153482667s
Jun 11 11:11:36.048: INFO: Pod "pod-projected-configmaps-27290f7a-0b67-4479-8544-1765b60fdfa2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.157223633s
STEP: Saw pod success
Jun 11 11:11:36.048: INFO: Pod "pod-projected-configmaps-27290f7a-0b67-4479-8544-1765b60fdfa2" satisfied condition "Succeeded or Failed"
Jun 11 11:11:36.050: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-27290f7a-0b67-4479-8544-1765b60fdfa2 container projected-configmap-volume-test:
STEP: delete the pod
Jun 11 11:11:36.128: INFO: Waiting for pod pod-projected-configmaps-27290f7a-0b67-4479-8544-1765b60fdfa2 to disappear
Jun 11 11:11:36.131: INFO: Pod pod-projected-configmaps-27290f7a-0b67-4479-8544-1765b60fdfa2 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:11:36.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5921" for this suite.
• [SLOW TEST:6.944 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":56,"skipped":977,"failed":0}
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:11:36.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 11:11:36.282: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jun 11 11:11:36.355: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:11:36.643: INFO: Number of nodes with available pods: 0
Jun 11 11:11:36.643: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:11:37.649: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:11:37.652: INFO: Number of nodes with available pods: 0
Jun 11 11:11:37.652: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:11:38.701: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:11:38.705: INFO: Number of nodes with available pods: 0
Jun 11 11:11:38.705: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:11:39.824: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:11:39.893: INFO: Number of nodes with available pods: 0
Jun 11 11:11:39.893: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:11:40.650: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:11:40.654: INFO: Number of nodes with available pods: 0
Jun 11 11:11:40.654: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:11:41.649: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:11:41.653: INFO: Number of nodes with available pods: 2
Jun 11 11:11:41.653: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jun 11 11:11:41.738: INFO: Wrong image for pod: daemon-set-gzcmf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jun 11 11:11:41.738: INFO: Wrong image for pod: daemon-set-nlv4m. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jun 11 11:11:41.759: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:11:42.764: INFO: Wrong image for pod: daemon-set-gzcmf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jun 11 11:11:42.764: INFO: Wrong image for pod: daemon-set-nlv4m. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jun 11 11:11:42.769: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:11:43.763: INFO: Wrong image for pod: daemon-set-gzcmf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jun 11 11:11:43.763: INFO: Wrong image for pod: daemon-set-nlv4m. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jun 11 11:11:43.767: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:11:44.764: INFO: Wrong image for pod: daemon-set-gzcmf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jun 11 11:11:44.764: INFO: Wrong image for pod: daemon-set-nlv4m. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jun 11 11:11:44.769: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:11:45.764: INFO: Wrong image for pod: daemon-set-gzcmf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jun 11 11:11:45.764: INFO: Pod daemon-set-gzcmf is not available
Jun 11 11:11:45.764: INFO: Wrong image for pod: daemon-set-nlv4m. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jun 11 11:11:45.769: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:11:46.763: INFO: Pod daemon-set-l7f6c is not available
Jun 11 11:11:46.763: INFO: Wrong image for pod: daemon-set-nlv4m. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jun 11 11:11:46.767: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:11:47.763: INFO: Pod daemon-set-l7f6c is not available
Jun 11 11:11:47.763: INFO: Wrong image for pod: daemon-set-nlv4m. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jun 11 11:11:47.767: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:11:48.763: INFO: Pod daemon-set-l7f6c is not available
Jun 11 11:11:48.763: INFO: Wrong image for pod: daemon-set-nlv4m. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jun 11 11:11:48.767: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:11:49.792: INFO: Wrong image for pod: daemon-set-nlv4m. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jun 11 11:11:49.796: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:11:50.763: INFO: Wrong image for pod: daemon-set-nlv4m. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jun 11 11:11:50.767: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:11:51.763: INFO: Wrong image for pod: daemon-set-nlv4m. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jun 11 11:11:51.763: INFO: Pod daemon-set-nlv4m is not available
Jun 11 11:11:51.767: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:11:52.763: INFO: Pod daemon-set-xs899 is not available
Jun 11 11:11:52.767: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Jun 11 11:11:52.770: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:11:52.772: INFO: Number of nodes with available pods: 1
Jun 11 11:11:52.772: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:11:53.780: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:11:53.816: INFO: Number of nodes with available pods: 1
Jun 11 11:11:53.816: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:11:54.778: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:11:54.782: INFO: Number of nodes with available pods: 1
Jun 11 11:11:54.782: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:11:55.777: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:11:55.780: INFO: Number of nodes with available pods: 2
Jun 11 11:11:55.780: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6498, will wait for the garbage collector to delete the pods
Jun 11 11:11:55.861: INFO: Deleting DaemonSet.extensions daemon-set took: 6.870752ms
Jun 11 11:11:56.262: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.390425ms
Jun 11 11:12:03.466: INFO: Number of nodes with available pods: 0
Jun 11 11:12:03.466: INFO: Number of running nodes: 0, number of available pods: 0
Jun 11 11:12:03.468: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6498/daemonsets","resourceVersion":"11509591"},"items":null}
Jun 11 11:12:03.471: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6498/pods","resourceVersion":"11509591"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:12:03.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6498" for this suite.
• [SLOW TEST:27.352 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":57,"skipped":982,"failed":0}
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:12:03.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-451a26be-c52d-45c9-8d34-546cbe61e1f9 in namespace container-probe-8646
Jun 11 11:12:07.630: INFO: Started pod busybox-451a26be-c52d-45c9-8d34-546cbe61e1f9 in namespace container-probe-8646
STEP: checking the pod's current state and verifying that restartCount is present
Jun 11 11:12:07.633: INFO: Initial restart count of pod busybox-451a26be-c52d-45c9-8d34-546cbe61e1f9 is 0
Jun 11 11:13:01.753: INFO: Restart count of pod container-probe-8646/busybox-451a26be-c52d-45c9-8d34-546cbe61e1f9 is now 1 (54.120191277s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:13:01.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8646" for this suite.
• [SLOW TEST:58.369 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":58,"skipped":982,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:13:01.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jun 11 11:13:02.027: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:13:23.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5015" for this suite.
• [SLOW TEST:21.928 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":59,"skipped":990,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:13:23.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 11:13:23.861: INFO: (0) /api/v1/nodes/kali-worker2:10250/proxy/logs/:
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jun 11 11:13:30.073: INFO: &Pod{ObjectMeta:{send-events-fbd265af-64d5-42b5-b989-2b56e28b6423  events-3360 /api/v1/namespaces/events-3360/pods/send-events-fbd265af-64d5-42b5-b989-2b56e28b6423 4c7f6a75-d69c-4fd8-9c27-eacd0e64475a 11509931 0 2020-06-11 11:13:24 +0000 UTC   map[name:foo time:39517931] map[] [] []  [{e2e.test Update v1 2020-06-11 11:13:24 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 116 105 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 112 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 99 111 110 116 97 105 110 101 114 80 111 114 116 92 34 58 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 80 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 125 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 
44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-06-11 11:13:28 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 54 51 92 34 
125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72kk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72kk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72kk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupCha
ngePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:13:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:13:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:13:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:13:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.163,StartTime:2020-06-11 11:13:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-11 11:13:27 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://8930420e3d48ee2c425355ceaed25620d58d5cc9e4aa5d90c8d846e8ff4b8b05,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.163,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Jun 11 11:13:32.087: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jun 11 11:13:34.092: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:13:34.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3360" for this suite.

• [SLOW TEST:10.171 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":275,"completed":61,"skipped":1011,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:13:34.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-6316
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-6316
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6316
Jun 11 11:13:34.273: INFO: Found 0 stateful pods, waiting for 1
Jun 11 11:13:44.278: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jun 11 11:13:44.280: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6316 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jun 11 11:13:44.516: INFO: stderr: "I0611 11:13:44.393554    1328 log.go:172] (0xc000960000) (0xc000974000) Create stream\nI0611 11:13:44.393615    1328 log.go:172] (0xc000960000) (0xc000974000) Stream added, broadcasting: 1\nI0611 11:13:44.395686    1328 log.go:172] (0xc000960000) Reply frame received for 1\nI0611 11:13:44.395724    1328 log.go:172] (0xc000960000) (0xc0009740a0) Create stream\nI0611 11:13:44.395735    1328 log.go:172] (0xc000960000) (0xc0009740a0) Stream added, broadcasting: 3\nI0611 11:13:44.396417    1328 log.go:172] (0xc000960000) Reply frame received for 3\nI0611 11:13:44.396438    1328 log.go:172] (0xc000960000) (0xc000974140) Create stream\nI0611 11:13:44.396444    1328 log.go:172] (0xc000960000) (0xc000974140) Stream added, broadcasting: 5\nI0611 11:13:44.397321    1328 log.go:172] (0xc000960000) Reply frame received for 5\nI0611 11:13:44.476772    1328 log.go:172] (0xc000960000) Data frame received for 5\nI0611 11:13:44.476797    1328 log.go:172] (0xc000974140) (5) Data frame handling\nI0611 11:13:44.476812    1328 log.go:172] (0xc000974140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0611 11:13:44.506947    1328 log.go:172] (0xc000960000) Data frame received for 3\nI0611 11:13:44.506990    1328 log.go:172] (0xc0009740a0) (3) Data frame handling\nI0611 11:13:44.507027    1328 log.go:172] (0xc0009740a0) (3) Data frame sent\nI0611 11:13:44.507047    1328 log.go:172] (0xc000960000) Data frame received for 3\nI0611 11:13:44.507068    1328 log.go:172] (0xc0009740a0) (3) Data frame handling\nI0611 11:13:44.507253    1328 log.go:172] (0xc000960000) Data frame received for 5\nI0611 11:13:44.507285    1328 log.go:172] (0xc000974140) (5) Data frame handling\nI0611 11:13:44.509292    1328 log.go:172] (0xc000960000) Data frame received for 1\nI0611 11:13:44.509310    1328 log.go:172] (0xc000974000) (1) Data frame handling\nI0611 11:13:44.509323    1328 log.go:172] (0xc000974000) (1) Data frame sent\nI0611 11:13:44.509468  
  1328 log.go:172] (0xc000960000) (0xc000974000) Stream removed, broadcasting: 1\nI0611 11:13:44.509509    1328 log.go:172] (0xc000960000) Go away received\nI0611 11:13:44.509740    1328 log.go:172] (0xc000960000) (0xc000974000) Stream removed, broadcasting: 1\nI0611 11:13:44.509755    1328 log.go:172] (0xc000960000) (0xc0009740a0) Stream removed, broadcasting: 3\nI0611 11:13:44.509763    1328 log.go:172] (0xc000960000) (0xc000974140) Stream removed, broadcasting: 5\n"
Jun 11 11:13:44.516: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jun 11 11:13:44.516: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
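The `mv` above is what drives ss-0 unready: with index.html gone from the Apache docroot, the pod's HTTP readiness probe starts failing, the pod goes Ready=false, and the test can then confirm that scale-up halts. A minimal sketch of a StatefulSet with such a probe, assuming an httpd image and an HTTP GET on /index.html (the e2e framework builds its spec in Go; names and values here are illustrative, not the exact object created):

```yaml
# Hypothetical sketch, not the exact spec the e2e test generates.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
  namespace: statefulset-6316
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels: {foo: bar, baz: blah}
  template:
    metadata:
      labels: {foo: bar, baz: blah}
    spec:
      containers:
      - name: webserver
        image: httpd:2.4                # assumption; the log only shows an Apache docroot
        readinessProbe:
          httpGet: {path: /index.html, port: 80}
          periodSeconds: 1              # starts failing as soon as index.html is moved away
```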

Jun 11 11:13:44.520: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jun 11 11:13:54.524: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jun 11 11:13:54.524: INFO: Waiting for statefulset status.replicas updated to 0
Jun 11 11:13:54.630: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999566s
Jun 11 11:13:55.635: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.904599872s
Jun 11 11:13:56.690: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.89946118s
Jun 11 11:13:57.693: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.844545746s
Jun 11 11:13:58.698: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.841094808s
Jun 11 11:13:59.732: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.836436701s
Jun 11 11:14:00.736: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.802648074s
Jun 11 11:14:01.755: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.798079919s
Jun 11 11:14:02.760: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.778954073s
Jun 11 11:14:03.764: INFO: Verifying statefulset ss doesn't scale past 1 for another 774.628341ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6316
Jun 11 11:14:04.769: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6316 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun 11 11:14:04.999: INFO: stderr: "I0611 11:14:04.920153    1347 log.go:172] (0xc0003ea000) (0xc000a88000) Create stream\nI0611 11:14:04.920213    1347 log.go:172] (0xc0003ea000) (0xc000a88000) Stream added, broadcasting: 1\nI0611 11:14:04.922673    1347 log.go:172] (0xc0003ea000) Reply frame received for 1\nI0611 11:14:04.922714    1347 log.go:172] (0xc0003ea000) (0xc00072f680) Create stream\nI0611 11:14:04.922729    1347 log.go:172] (0xc0003ea000) (0xc00072f680) Stream added, broadcasting: 3\nI0611 11:14:04.923772    1347 log.go:172] (0xc0003ea000) Reply frame received for 3\nI0611 11:14:04.923803    1347 log.go:172] (0xc0003ea000) (0xc00052ec80) Create stream\nI0611 11:14:04.923812    1347 log.go:172] (0xc0003ea000) (0xc00052ec80) Stream added, broadcasting: 5\nI0611 11:14:04.924674    1347 log.go:172] (0xc0003ea000) Reply frame received for 5\nI0611 11:14:04.992284    1347 log.go:172] (0xc0003ea000) Data frame received for 3\nI0611 11:14:04.992315    1347 log.go:172] (0xc00072f680) (3) Data frame handling\nI0611 11:14:04.992326    1347 log.go:172] (0xc00072f680) (3) Data frame sent\nI0611 11:14:04.992335    1347 log.go:172] (0xc0003ea000) Data frame received for 3\nI0611 11:14:04.992341    1347 log.go:172] (0xc00072f680) (3) Data frame handling\nI0611 11:14:04.992381    1347 log.go:172] (0xc0003ea000) Data frame received for 5\nI0611 11:14:04.992423    1347 log.go:172] (0xc00052ec80) (5) Data frame handling\nI0611 11:14:04.992455    1347 log.go:172] (0xc00052ec80) (5) Data frame sent\nI0611 11:14:04.992475    1347 log.go:172] (0xc0003ea000) Data frame received for 5\nI0611 11:14:04.992512    1347 log.go:172] (0xc00052ec80) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0611 11:14:04.994212    1347 log.go:172] (0xc0003ea000) Data frame received for 1\nI0611 11:14:04.994245    1347 log.go:172] (0xc000a88000) (1) Data frame handling\nI0611 11:14:04.994264    1347 log.go:172] (0xc000a88000) (1) Data frame sent\nI0611 11:14:04.994284  
  1347 log.go:172] (0xc0003ea000) (0xc000a88000) Stream removed, broadcasting: 1\nI0611 11:14:04.994323    1347 log.go:172] (0xc0003ea000) Go away received\nI0611 11:14:04.994636    1347 log.go:172] (0xc0003ea000) (0xc000a88000) Stream removed, broadcasting: 1\nI0611 11:14:04.994654    1347 log.go:172] (0xc0003ea000) (0xc00072f680) Stream removed, broadcasting: 3\nI0611 11:14:04.994662    1347 log.go:172] (0xc0003ea000) (0xc00052ec80) Stream removed, broadcasting: 5\n"
Jun 11 11:14:04.999: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jun 11 11:14:04.999: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jun 11 11:14:05.003: INFO: Found 1 stateful pods, waiting for 3
Jun 11 11:14:15.008: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 11 11:14:15.008: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 11 11:14:15.008: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jun 11 11:14:15.016: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6316 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jun 11 11:14:15.226: INFO: stderr: "I0611 11:14:15.149856    1368 log.go:172] (0xc000ab60b0) (0xc000452be0) Create stream\nI0611 11:14:15.149935    1368 log.go:172] (0xc000ab60b0) (0xc000452be0) Stream added, broadcasting: 1\nI0611 11:14:15.153425    1368 log.go:172] (0xc000ab60b0) Reply frame received for 1\nI0611 11:14:15.153594    1368 log.go:172] (0xc000ab60b0) (0xc000aaa000) Create stream\nI0611 11:14:15.153611    1368 log.go:172] (0xc000ab60b0) (0xc000aaa000) Stream added, broadcasting: 3\nI0611 11:14:15.154862    1368 log.go:172] (0xc000ab60b0) Reply frame received for 3\nI0611 11:14:15.154916    1368 log.go:172] (0xc000ab60b0) (0xc000428000) Create stream\nI0611 11:14:15.154930    1368 log.go:172] (0xc000ab60b0) (0xc000428000) Stream added, broadcasting: 5\nI0611 11:14:15.156160    1368 log.go:172] (0xc000ab60b0) Reply frame received for 5\nI0611 11:14:15.217831    1368 log.go:172] (0xc000ab60b0) Data frame received for 3\nI0611 11:14:15.217880    1368 log.go:172] (0xc000aaa000) (3) Data frame handling\nI0611 11:14:15.217898    1368 log.go:172] (0xc000aaa000) (3) Data frame sent\nI0611 11:14:15.217914    1368 log.go:172] (0xc000ab60b0) Data frame received for 3\nI0611 11:14:15.217921    1368 log.go:172] (0xc000aaa000) (3) Data frame handling\nI0611 11:14:15.217964    1368 log.go:172] (0xc000ab60b0) Data frame received for 5\nI0611 11:14:15.217988    1368 log.go:172] (0xc000428000) (5) Data frame handling\nI0611 11:14:15.218003    1368 log.go:172] (0xc000428000) (5) Data frame sent\nI0611 11:14:15.218015    1368 log.go:172] (0xc000ab60b0) Data frame received for 5\nI0611 11:14:15.218026    1368 log.go:172] (0xc000428000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0611 11:14:15.219484    1368 log.go:172] (0xc000ab60b0) Data frame received for 1\nI0611 11:14:15.219500    1368 log.go:172] (0xc000452be0) (1) Data frame handling\nI0611 11:14:15.219512    1368 log.go:172] (0xc000452be0) (1) Data frame sent\nI0611 11:14:15.219536  
  1368 log.go:172] (0xc000ab60b0) (0xc000452be0) Stream removed, broadcasting: 1\nI0611 11:14:15.219774    1368 log.go:172] (0xc000ab60b0) (0xc000452be0) Stream removed, broadcasting: 1\nI0611 11:14:15.219792    1368 log.go:172] (0xc000ab60b0) (0xc000aaa000) Stream removed, broadcasting: 3\nI0611 11:14:15.219853    1368 log.go:172] (0xc000ab60b0) Go away received\nI0611 11:14:15.219953    1368 log.go:172] (0xc000ab60b0) (0xc000428000) Stream removed, broadcasting: 5\n"
Jun 11 11:14:15.226: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jun 11 11:14:15.226: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jun 11 11:14:15.226: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6316 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jun 11 11:14:15.485: INFO: stderr: "I0611 11:14:15.366104    1389 log.go:172] (0xc0009d88f0) (0xc000c20140) Create stream\nI0611 11:14:15.366168    1389 log.go:172] (0xc0009d88f0) (0xc000c20140) Stream added, broadcasting: 1\nI0611 11:14:15.368563    1389 log.go:172] (0xc0009d88f0) Reply frame received for 1\nI0611 11:14:15.368601    1389 log.go:172] (0xc0009d88f0) (0xc0007f1180) Create stream\nI0611 11:14:15.368612    1389 log.go:172] (0xc0009d88f0) (0xc0007f1180) Stream added, broadcasting: 3\nI0611 11:14:15.370152    1389 log.go:172] (0xc0009d88f0) Reply frame received for 3\nI0611 11:14:15.370193    1389 log.go:172] (0xc0009d88f0) (0xc000c201e0) Create stream\nI0611 11:14:15.370205    1389 log.go:172] (0xc0009d88f0) (0xc000c201e0) Stream added, broadcasting: 5\nI0611 11:14:15.371034    1389 log.go:172] (0xc0009d88f0) Reply frame received for 5\nI0611 11:14:15.448407    1389 log.go:172] (0xc0009d88f0) Data frame received for 5\nI0611 11:14:15.448448    1389 log.go:172] (0xc000c201e0) (5) Data frame handling\nI0611 11:14:15.448470    1389 log.go:172] (0xc000c201e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0611 11:14:15.476964    1389 log.go:172] (0xc0009d88f0) Data frame received for 5\nI0611 11:14:15.477000    1389 log.go:172] (0xc0009d88f0) Data frame received for 3\nI0611 11:14:15.477022    1389 log.go:172] (0xc0007f1180) (3) Data frame handling\nI0611 11:14:15.477035    1389 log.go:172] (0xc0007f1180) (3) Data frame sent\nI0611 11:14:15.477043    1389 log.go:172] (0xc0009d88f0) Data frame received for 3\nI0611 11:14:15.477053    1389 log.go:172] (0xc0007f1180) (3) Data frame handling\nI0611 11:14:15.477069    1389 log.go:172] (0xc000c201e0) (5) Data frame handling\nI0611 11:14:15.478661    1389 log.go:172] (0xc0009d88f0) Data frame received for 1\nI0611 11:14:15.478698    1389 log.go:172] (0xc000c20140) (1) Data frame handling\nI0611 11:14:15.478812    1389 log.go:172] (0xc000c20140) (1) Data frame sent\nI0611 11:14:15.478851  
  1389 log.go:172] (0xc0009d88f0) (0xc000c20140) Stream removed, broadcasting: 1\nI0611 11:14:15.478874    1389 log.go:172] (0xc0009d88f0) Go away received\nI0611 11:14:15.479303    1389 log.go:172] (0xc0009d88f0) (0xc000c20140) Stream removed, broadcasting: 1\nI0611 11:14:15.479333    1389 log.go:172] (0xc0009d88f0) (0xc0007f1180) Stream removed, broadcasting: 3\nI0611 11:14:15.479345    1389 log.go:172] (0xc0009d88f0) (0xc000c201e0) Stream removed, broadcasting: 5\n"
Jun 11 11:14:15.485: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jun 11 11:14:15.485: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jun 11 11:14:15.485: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6316 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jun 11 11:14:15.749: INFO: stderr: "I0611 11:14:15.621657    1410 log.go:172] (0xc0009a04d0) (0xc000406b40) Create stream\nI0611 11:14:15.621721    1410 log.go:172] (0xc0009a04d0) (0xc000406b40) Stream added, broadcasting: 1\nI0611 11:14:15.624382    1410 log.go:172] (0xc0009a04d0) Reply frame received for 1\nI0611 11:14:15.624418    1410 log.go:172] (0xc0009a04d0) (0xc0006b72c0) Create stream\nI0611 11:14:15.624425    1410 log.go:172] (0xc0009a04d0) (0xc0006b72c0) Stream added, broadcasting: 3\nI0611 11:14:15.625600    1410 log.go:172] (0xc0009a04d0) Reply frame received for 3\nI0611 11:14:15.625630    1410 log.go:172] (0xc0009a04d0) (0xc00095e000) Create stream\nI0611 11:14:15.625640    1410 log.go:172] (0xc0009a04d0) (0xc00095e000) Stream added, broadcasting: 5\nI0611 11:14:15.626596    1410 log.go:172] (0xc0009a04d0) Reply frame received for 5\nI0611 11:14:15.700166    1410 log.go:172] (0xc0009a04d0) Data frame received for 5\nI0611 11:14:15.700190    1410 log.go:172] (0xc00095e000) (5) Data frame handling\nI0611 11:14:15.700208    1410 log.go:172] (0xc00095e000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0611 11:14:15.740089    1410 log.go:172] (0xc0009a04d0) Data frame received for 3\nI0611 11:14:15.740214    1410 log.go:172] (0xc0006b72c0) (3) Data frame handling\nI0611 11:14:15.740280    1410 log.go:172] (0xc0006b72c0) (3) Data frame sent\nI0611 11:14:15.740386    1410 log.go:172] (0xc0009a04d0) Data frame received for 3\nI0611 11:14:15.740397    1410 log.go:172] (0xc0006b72c0) (3) Data frame handling\nI0611 11:14:15.740423    1410 log.go:172] (0xc0009a04d0) Data frame received for 5\nI0611 11:14:15.740435    1410 log.go:172] (0xc00095e000) (5) Data frame handling\nI0611 11:14:15.742348    1410 log.go:172] (0xc0009a04d0) Data frame received for 1\nI0611 11:14:15.742366    1410 log.go:172] (0xc000406b40) (1) Data frame handling\nI0611 11:14:15.742487    1410 log.go:172] (0xc000406b40) (1) Data frame sent\nI0611 11:14:15.742503  
  1410 log.go:172] (0xc0009a04d0) (0xc000406b40) Stream removed, broadcasting: 1\nI0611 11:14:15.742513    1410 log.go:172] (0xc0009a04d0) Go away received\nI0611 11:14:15.742849    1410 log.go:172] (0xc0009a04d0) (0xc000406b40) Stream removed, broadcasting: 1\nI0611 11:14:15.742864    1410 log.go:172] (0xc0009a04d0) (0xc0006b72c0) Stream removed, broadcasting: 3\nI0611 11:14:15.742871    1410 log.go:172] (0xc0009a04d0) (0xc00095e000) Stream removed, broadcasting: 5\n"
Jun 11 11:14:15.749: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jun 11 11:14:15.749: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jun 11 11:14:15.749: INFO: Waiting for statefulset status.replicas updated to 0
Jun 11 11:14:15.752: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Jun 11 11:14:25.761: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jun 11 11:14:25.761: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jun 11 11:14:25.761: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jun 11 11:14:25.791: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999126s
Jun 11 11:14:26.797: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.97785496s
Jun 11 11:14:27.803: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.971567219s
Jun 11 11:14:28.810: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.9664269s
Jun 11 11:14:29.817: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.959522586s
Jun 11 11:14:30.823: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.952090474s
Jun 11 11:14:31.827: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.946211863s
Jun 11 11:14:32.832: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.941832195s
Jun 11 11:14:33.838: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.937095098s
Jun 11 11:14:34.842: INFO: Verifying statefulset ss doesn't scale past 3 for another 931.306786ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-6316

Jun 11 11:14:35.846: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6316 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun 11 11:14:36.090: INFO: stderr: "I0611 11:14:35.987538    1431 log.go:172] (0xc000b0c0b0) (0xc00069d4a0) Create stream\nI0611 11:14:35.987620    1431 log.go:172] (0xc000b0c0b0) (0xc00069d4a0) Stream added, broadcasting: 1\nI0611 11:14:35.990745    1431 log.go:172] (0xc000b0c0b0) Reply frame received for 1\nI0611 11:14:35.990804    1431 log.go:172] (0xc000b0c0b0) (0xc0009f4000) Create stream\nI0611 11:14:35.990817    1431 log.go:172] (0xc000b0c0b0) (0xc0009f4000) Stream added, broadcasting: 3\nI0611 11:14:35.991714    1431 log.go:172] (0xc000b0c0b0) Reply frame received for 3\nI0611 11:14:35.991747    1431 log.go:172] (0xc000b0c0b0) (0xc00069d540) Create stream\nI0611 11:14:35.991759    1431 log.go:172] (0xc000b0c0b0) (0xc00069d540) Stream added, broadcasting: 5\nI0611 11:14:35.992779    1431 log.go:172] (0xc000b0c0b0) Reply frame received for 5\nI0611 11:14:36.081440    1431 log.go:172] (0xc000b0c0b0) Data frame received for 5\nI0611 11:14:36.081481    1431 log.go:172] (0xc000b0c0b0) Data frame received for 3\nI0611 11:14:36.081507    1431 log.go:172] (0xc0009f4000) (3) Data frame handling\nI0611 11:14:36.081520    1431 log.go:172] (0xc0009f4000) (3) Data frame sent\nI0611 11:14:36.081534    1431 log.go:172] (0xc000b0c0b0) Data frame received for 3\nI0611 11:14:36.081546    1431 log.go:172] (0xc0009f4000) (3) Data frame handling\nI0611 11:14:36.081574    1431 log.go:172] (0xc00069d540) (5) Data frame handling\nI0611 11:14:36.081600    1431 log.go:172] (0xc00069d540) (5) Data frame sent\nI0611 11:14:36.081609    1431 log.go:172] (0xc000b0c0b0) Data frame received for 5\nI0611 11:14:36.081615    1431 log.go:172] (0xc00069d540) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0611 11:14:36.083387    1431 log.go:172] (0xc000b0c0b0) Data frame received for 1\nI0611 11:14:36.083423    1431 log.go:172] (0xc00069d4a0) (1) Data frame handling\nI0611 11:14:36.083439    1431 log.go:172] (0xc00069d4a0) (1) Data frame sent\nI0611 11:14:36.083455  
  1431 log.go:172] (0xc000b0c0b0) (0xc00069d4a0) Stream removed, broadcasting: 1\nI0611 11:14:36.083841    1431 log.go:172] (0xc000b0c0b0) (0xc00069d4a0) Stream removed, broadcasting: 1\nI0611 11:14:36.083863    1431 log.go:172] (0xc000b0c0b0) (0xc0009f4000) Stream removed, broadcasting: 3\nI0611 11:14:36.083876    1431 log.go:172] (0xc000b0c0b0) (0xc00069d540) Stream removed, broadcasting: 5\n"
Jun 11 11:14:36.090: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jun 11 11:14:36.090: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jun 11 11:14:36.090: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6316 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun 11 11:14:36.326: INFO: stderr: "I0611 11:14:36.237386    1453 log.go:172] (0xc0009ae000) (0xc0007332c0) Create stream\nI0611 11:14:36.237483    1453 log.go:172] (0xc0009ae000) (0xc0007332c0) Stream added, broadcasting: 1\nI0611 11:14:36.240078    1453 log.go:172] (0xc0009ae000) Reply frame received for 1\nI0611 11:14:36.240125    1453 log.go:172] (0xc0009ae000) (0xc00090c000) Create stream\nI0611 11:14:36.240148    1453 log.go:172] (0xc0009ae000) (0xc00090c000) Stream added, broadcasting: 3\nI0611 11:14:36.241648    1453 log.go:172] (0xc0009ae000) Reply frame received for 3\nI0611 11:14:36.241707    1453 log.go:172] (0xc0009ae000) (0xc000920000) Create stream\nI0611 11:14:36.241728    1453 log.go:172] (0xc0009ae000) (0xc000920000) Stream added, broadcasting: 5\nI0611 11:14:36.242878    1453 log.go:172] (0xc0009ae000) Reply frame received for 5\nI0611 11:14:36.318138    1453 log.go:172] (0xc0009ae000) Data frame received for 5\nI0611 11:14:36.318244    1453 log.go:172] (0xc000920000) (5) Data frame handling\nI0611 11:14:36.318268    1453 log.go:172] (0xc000920000) (5) Data frame sent\nI0611 11:14:36.318280    1453 log.go:172] (0xc0009ae000) Data frame received for 5\nI0611 11:14:36.318289    1453 log.go:172] (0xc000920000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0611 11:14:36.318307    1453 log.go:172] (0xc0009ae000) Data frame received for 3\nI0611 11:14:36.318313    1453 log.go:172] (0xc00090c000) (3) Data frame handling\nI0611 11:14:36.318325    1453 log.go:172] (0xc00090c000) (3) Data frame sent\nI0611 11:14:36.318331    1453 log.go:172] (0xc0009ae000) Data frame received for 3\nI0611 11:14:36.318336    1453 log.go:172] (0xc00090c000) (3) Data frame handling\nI0611 11:14:36.319628    1453 log.go:172] (0xc0009ae000) Data frame received for 1\nI0611 11:14:36.319648    1453 log.go:172] (0xc0007332c0) (1) Data frame handling\nI0611 11:14:36.319665    1453 log.go:172] (0xc0007332c0) (1) Data frame sent\nI0611 11:14:36.319675  
  1453 log.go:172] (0xc0009ae000) (0xc0007332c0) Stream removed, broadcasting: 1\nI0611 11:14:36.319888    1453 log.go:172] (0xc0009ae000) (0xc0007332c0) Stream removed, broadcasting: 1\nI0611 11:14:36.319902    1453 log.go:172] (0xc0009ae000) (0xc00090c000) Stream removed, broadcasting: 3\nI0611 11:14:36.320016    1453 log.go:172] (0xc0009ae000) Go away received\nI0611 11:14:36.320039    1453 log.go:172] (0xc0009ae000) (0xc000920000) Stream removed, broadcasting: 5\n"
Jun 11 11:14:36.326: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jun 11 11:14:36.326: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jun 11 11:14:36.326: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6316 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun 11 11:14:36.514: INFO: stderr: "I0611 11:14:36.457393    1476 log.go:172] (0xc000836000) (0xc0008d0b40) Create stream\nI0611 11:14:36.457458    1476 log.go:172] (0xc000836000) (0xc0008d0b40) Stream added, broadcasting: 1\nI0611 11:14:36.460345    1476 log.go:172] (0xc000836000) Reply frame received for 1\nI0611 11:14:36.460371    1476 log.go:172] (0xc000836000) (0xc0008d0be0) Create stream\nI0611 11:14:36.460385    1476 log.go:172] (0xc000836000) (0xc0008d0be0) Stream added, broadcasting: 3\nI0611 11:14:36.461549    1476 log.go:172] (0xc000836000) Reply frame received for 3\nI0611 11:14:36.461599    1476 log.go:172] (0xc000836000) (0xc0008d0c80) Create stream\nI0611 11:14:36.461623    1476 log.go:172] (0xc000836000) (0xc0008d0c80) Stream added, broadcasting: 5\nI0611 11:14:36.462430    1476 log.go:172] (0xc000836000) Reply frame received for 5\nI0611 11:14:36.507454    1476 log.go:172] (0xc000836000) Data frame received for 3\nI0611 11:14:36.507498    1476 log.go:172] (0xc0008d0be0) (3) Data frame handling\nI0611 11:14:36.507529    1476 log.go:172] (0xc0008d0be0) (3) Data frame sent\nI0611 11:14:36.507559    1476 log.go:172] (0xc000836000) Data frame received for 3\nI0611 11:14:36.507599    1476 log.go:172] (0xc000836000) Data frame received for 5\nI0611 11:14:36.507648    1476 log.go:172] (0xc0008d0c80) (5) Data frame handling\nI0611 11:14:36.507681    1476 log.go:172] (0xc0008d0c80) (5) Data frame sent\nI0611 11:14:36.507698    1476 log.go:172] (0xc000836000) Data frame received for 5\nI0611 11:14:36.507711    1476 log.go:172] (0xc0008d0c80) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0611 11:14:36.507740    1476 log.go:172] (0xc0008d0be0) (3) Data frame handling\nI0611 11:14:36.508701    1476 log.go:172] (0xc000836000) Data frame received for 1\nI0611 11:14:36.508714    1476 log.go:172] (0xc0008d0b40) (1) Data frame handling\nI0611 11:14:36.508721    1476 log.go:172] (0xc0008d0b40) (1) Data frame sent\nI0611 11:14:36.508728  
  1476 log.go:172] (0xc000836000) (0xc0008d0b40) Stream removed, broadcasting: 1\nI0611 11:14:36.508901    1476 log.go:172] (0xc000836000) Go away received\nI0611 11:14:36.509008    1476 log.go:172] (0xc000836000) (0xc0008d0b40) Stream removed, broadcasting: 1\nI0611 11:14:36.509026    1476 log.go:172] (0xc000836000) (0xc0008d0be0) Stream removed, broadcasting: 3\nI0611 11:14:36.509034    1476 log.go:172] (0xc000836000) (0xc0008d0c80) Stream removed, broadcasting: 5\n"
Jun 11 11:14:36.515: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jun 11 11:14:36.515: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jun 11 11:14:36.515: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
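The ordered scale-up (ss-0, then ss-1, then ss-2) and reverse-order scale-down verified in this test follow from the StatefulSet's pod management policy; OrderedReady is the default. An illustrative fragment of the relevant field (not part of the spec the e2e framework actually emits, which is built in Go):

```yaml
# Hypothetical fragment for illustration only.
spec:
  podManagementPolicy: OrderedReady  # default: create pods 0..N-1 in order, delete N-1..0,
                                     # waiting for each pod to be Running and Ready in between
  # podManagementPolicy: Parallel    # would create/delete all pods at once instead
```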
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jun 11 11:14:56.541: INFO: Deleting all statefulset in ns statefulset-6316
Jun 11 11:14:56.563: INFO: Scaling statefulset ss to 0
Jun 11 11:14:56.572: INFO: Waiting for statefulset status.replicas updated to 0
Jun 11 11:14:56.574: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:14:56.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6316" for this suite.

• [SLOW TEST:82.452 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":62,"skipped":1023,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:14:56.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 11:14:56.645: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-3572e99a-0ec3-49d6-91ca-f38983344406" in namespace "security-context-test-9846" to be "Succeeded or Failed"
Jun 11 11:14:56.661: INFO: Pod "busybox-privileged-false-3572e99a-0ec3-49d6-91ca-f38983344406": Phase="Pending", Reason="", readiness=false. Elapsed: 16.335782ms
Jun 11 11:14:58.666: INFO: Pod "busybox-privileged-false-3572e99a-0ec3-49d6-91ca-f38983344406": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020864883s
Jun 11 11:15:00.670: INFO: Pod "busybox-privileged-false-3572e99a-0ec3-49d6-91ca-f38983344406": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025266732s
Jun 11 11:15:00.670: INFO: Pod "busybox-privileged-false-3572e99a-0ec3-49d6-91ca-f38983344406" satisfied condition "Succeeded or Failed"
Jun 11 11:15:00.691: INFO: Got logs for pod "busybox-privileged-false-3572e99a-0ec3-49d6-91ca-f38983344406": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:15:00.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9846" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":63,"skipped":1050,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:15:00.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun 11 11:15:00.900: INFO: Waiting up to 5m0s for pod "pod-97377928-d157-4d60-a6fd-744a0b22d81c" in namespace "emptydir-3418" to be "Succeeded or Failed"
Jun 11 11:15:00.907: INFO: Pod "pod-97377928-d157-4d60-a6fd-744a0b22d81c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.485283ms
Jun 11 11:15:03.206: INFO: Pod "pod-97377928-d157-4d60-a6fd-744a0b22d81c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.305811571s
Jun 11 11:15:05.210: INFO: Pod "pod-97377928-d157-4d60-a6fd-744a0b22d81c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.310024285s
STEP: Saw pod success
Jun 11 11:15:05.210: INFO: Pod "pod-97377928-d157-4d60-a6fd-744a0b22d81c" satisfied condition "Succeeded or Failed"
Jun 11 11:15:05.213: INFO: Trying to get logs from node kali-worker pod pod-97377928-d157-4d60-a6fd-744a0b22d81c container test-container: 
STEP: delete the pod
Jun 11 11:15:05.262: INFO: Waiting for pod pod-97377928-d157-4d60-a6fd-744a0b22d81c to disappear
Jun 11 11:15:05.273: INFO: Pod pod-97377928-d157-4d60-a6fd-744a0b22d81c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:15:05.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3418" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":64,"skipped":1094,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:15:05.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting the proxy server
Jun 11 11:15:05.410: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:15:05.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-805" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":275,"completed":65,"skipped":1126,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:15:05.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0611 11:15:18.400911       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 11 11:15:18.400: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:15:18.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3835" for this suite.

• [SLOW TEST:13.063 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":66,"skipped":1141,"failed":0}
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:15:18.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jun 11 11:15:25.073: INFO: Successfully updated pod "pod-update-activedeadlineseconds-4d74ca1f-f7c5-428a-8105-bc5495529da1"
Jun 11 11:15:25.073: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-4d74ca1f-f7c5-428a-8105-bc5495529da1" in namespace "pods-9177" to be "terminated due to deadline exceeded"
Jun 11 11:15:25.130: INFO: Pod "pod-update-activedeadlineseconds-4d74ca1f-f7c5-428a-8105-bc5495529da1": Phase="Running", Reason="", readiness=true. Elapsed: 56.470602ms
Jun 11 11:15:27.134: INFO: Pod "pod-update-activedeadlineseconds-4d74ca1f-f7c5-428a-8105-bc5495529da1": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.060947124s
Jun 11 11:15:27.134: INFO: Pod "pod-update-activedeadlineseconds-4d74ca1f-f7c5-428a-8105-bc5495529da1" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:15:27.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9177" for this suite.

• [SLOW TEST:8.295 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":67,"skipped":1141,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:15:27.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:15:40.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4802" for this suite.

• [SLOW TEST:13.594 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":68,"skipped":1160,"failed":0}
SSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:15:40.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8615.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8615.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8615.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8615.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 11 11:15:48.958: INFO: DNS probes using dns-test-6b3a69cc-fe70-40af-bb78-b59c10ec59f0 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8615.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8615.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8615.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8615.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 11 11:15:59.653: INFO: File wheezy_udp@dns-test-service-3.dns-8615.svc.cluster.local from pod  dns-8615/dns-test-5330aabf-b544-4be1-be70-4e503ebfa246 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 11 11:15:59.656: INFO: File jessie_udp@dns-test-service-3.dns-8615.svc.cluster.local from pod  dns-8615/dns-test-5330aabf-b544-4be1-be70-4e503ebfa246 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 11 11:15:59.656: INFO: Lookups using dns-8615/dns-test-5330aabf-b544-4be1-be70-4e503ebfa246 failed for: [wheezy_udp@dns-test-service-3.dns-8615.svc.cluster.local jessie_udp@dns-test-service-3.dns-8615.svc.cluster.local]

Jun 11 11:16:04.724: INFO: File wheezy_udp@dns-test-service-3.dns-8615.svc.cluster.local from pod  dns-8615/dns-test-5330aabf-b544-4be1-be70-4e503ebfa246 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 11 11:16:04.727: INFO: File jessie_udp@dns-test-service-3.dns-8615.svc.cluster.local from pod  dns-8615/dns-test-5330aabf-b544-4be1-be70-4e503ebfa246 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 11 11:16:04.727: INFO: Lookups using dns-8615/dns-test-5330aabf-b544-4be1-be70-4e503ebfa246 failed for: [wheezy_udp@dns-test-service-3.dns-8615.svc.cluster.local jessie_udp@dns-test-service-3.dns-8615.svc.cluster.local]

Jun 11 11:16:09.661: INFO: File wheezy_udp@dns-test-service-3.dns-8615.svc.cluster.local from pod  dns-8615/dns-test-5330aabf-b544-4be1-be70-4e503ebfa246 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 11 11:16:09.665: INFO: File jessie_udp@dns-test-service-3.dns-8615.svc.cluster.local from pod  dns-8615/dns-test-5330aabf-b544-4be1-be70-4e503ebfa246 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 11 11:16:09.665: INFO: Lookups using dns-8615/dns-test-5330aabf-b544-4be1-be70-4e503ebfa246 failed for: [wheezy_udp@dns-test-service-3.dns-8615.svc.cluster.local jessie_udp@dns-test-service-3.dns-8615.svc.cluster.local]

Jun 11 11:16:14.661: INFO: File wheezy_udp@dns-test-service-3.dns-8615.svc.cluster.local from pod  dns-8615/dns-test-5330aabf-b544-4be1-be70-4e503ebfa246 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 11 11:16:14.664: INFO: File jessie_udp@dns-test-service-3.dns-8615.svc.cluster.local from pod  dns-8615/dns-test-5330aabf-b544-4be1-be70-4e503ebfa246 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 11 11:16:14.665: INFO: Lookups using dns-8615/dns-test-5330aabf-b544-4be1-be70-4e503ebfa246 failed for: [wheezy_udp@dns-test-service-3.dns-8615.svc.cluster.local jessie_udp@dns-test-service-3.dns-8615.svc.cluster.local]

Jun 11 11:16:19.666: INFO: DNS probes using dns-test-5330aabf-b544-4be1-be70-4e503ebfa246 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8615.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8615.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8615.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8615.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 11 11:16:28.148: INFO: DNS probes using dns-test-70b3060a-e9ec-415d-99ef-98e0a84e45d2 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:16:28.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8615" for this suite.

• [SLOW TEST:47.546 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":69,"skipped":1163,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:16:28.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test env composition
Jun 11 11:16:28.931: INFO: Waiting up to 5m0s for pod "var-expansion-73abc0ae-92fc-4ba2-ac92-5692167a8c4c" in namespace "var-expansion-2478" to be "Succeeded or Failed"
Jun 11 11:16:28.942: INFO: Pod "var-expansion-73abc0ae-92fc-4ba2-ac92-5692167a8c4c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.811297ms
Jun 11 11:16:30.975: INFO: Pod "var-expansion-73abc0ae-92fc-4ba2-ac92-5692167a8c4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043753102s
Jun 11 11:16:32.979: INFO: Pod "var-expansion-73abc0ae-92fc-4ba2-ac92-5692167a8c4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047867935s
STEP: Saw pod success
Jun 11 11:16:32.979: INFO: Pod "var-expansion-73abc0ae-92fc-4ba2-ac92-5692167a8c4c" satisfied condition "Succeeded or Failed"
Jun 11 11:16:32.982: INFO: Trying to get logs from node kali-worker2 pod var-expansion-73abc0ae-92fc-4ba2-ac92-5692167a8c4c container dapi-container: 
STEP: delete the pod
Jun 11 11:16:33.025: INFO: Waiting for pod var-expansion-73abc0ae-92fc-4ba2-ac92-5692167a8c4c to disappear
Jun 11 11:16:33.330: INFO: Pod var-expansion-73abc0ae-92fc-4ba2-ac92-5692167a8c4c no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:16:33.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2478" for this suite.

• [SLOW TEST:5.069 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":70,"skipped":1183,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:16:33.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 11:16:33.516: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:16:39.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3879" for this suite.

• [SLOW TEST:6.569 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":275,"completed":71,"skipped":1191,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:16:39.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-projected-all-test-volume-05ae6b70-6440-41b0-9f37-3f7a39c11134
STEP: Creating secret with name secret-projected-all-test-volume-6f452a6f-6efa-4886-b93c-3e8609b75033
STEP: Creating a pod to test Check all projections for projected volume plugin
Jun 11 11:16:40.056: INFO: Waiting up to 5m0s for pod "projected-volume-bdf0a941-96ae-4779-a21e-967107fce00d" in namespace "projected-3752" to be "Succeeded or Failed"
Jun 11 11:16:40.065: INFO: Pod "projected-volume-bdf0a941-96ae-4779-a21e-967107fce00d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.363773ms
Jun 11 11:16:42.070: INFO: Pod "projected-volume-bdf0a941-96ae-4779-a21e-967107fce00d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014214356s
Jun 11 11:16:44.075: INFO: Pod "projected-volume-bdf0a941-96ae-4779-a21e-967107fce00d": Phase="Running", Reason="", readiness=true. Elapsed: 4.018918938s
Jun 11 11:16:46.114: INFO: Pod "projected-volume-bdf0a941-96ae-4779-a21e-967107fce00d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.058461829s
STEP: Saw pod success
Jun 11 11:16:46.114: INFO: Pod "projected-volume-bdf0a941-96ae-4779-a21e-967107fce00d" satisfied condition "Succeeded or Failed"
Jun 11 11:16:46.117: INFO: Trying to get logs from node kali-worker pod projected-volume-bdf0a941-96ae-4779-a21e-967107fce00d container projected-all-volume-test: 
STEP: delete the pod
Jun 11 11:16:46.464: INFO: Waiting for pod projected-volume-bdf0a941-96ae-4779-a21e-967107fce00d to disappear
Jun 11 11:16:46.508: INFO: Pod projected-volume-bdf0a941-96ae-4779-a21e-967107fce00d no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:16:46.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3752" for this suite.

• [SLOW TEST:6.738 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":72,"skipped":1238,"failed":0}
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:16:46.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jun 11 11:16:55.006: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 11 11:16:55.012: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 11 11:16:57.012: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 11 11:16:57.031: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 11 11:16:59.012: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 11 11:16:59.017: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:16:59.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-543" for this suite.

• [SLOW TEST:12.365 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":73,"skipped":1238,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:16:59.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Jun 11 11:17:03.674: INFO: Successfully updated pod "annotationupdatec2fd76af-0e9f-4dd7-8400-c68620599cc8"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:17:07.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6668" for this suite.

• [SLOW TEST:8.683 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":74,"skipped":1252,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:17:07.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-b85bd686-86e2-4e44-9df8-22ff9241b2f6
STEP: Creating configMap with name cm-test-opt-upd-c7456430-315a-48d8-bb09-4596e6890891
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-b85bd686-86e2-4e44-9df8-22ff9241b2f6
STEP: Updating configmap cm-test-opt-upd-c7456430-315a-48d8-bb09-4596e6890891
STEP: Creating configMap with name cm-test-opt-create-fbc54483-e721-4ec7-9522-01d3ca6cf03b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:18:26.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6488" for this suite.

• [SLOW TEST:79.153 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":75,"skipped":1264,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:18:26.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 11:18:26.972: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jun 11 11:18:31.982: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jun 11 11:18:31.982: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jun 11 11:18:33.986: INFO: Creating deployment "test-rollover-deployment"
Jun 11 11:18:34.120: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jun 11 11:18:38.127: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jun 11 11:18:38.132: INFO: Ensure that both replica sets have 1 created replica
Jun 11 11:18:38.138: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jun 11 11:18:38.144: INFO: Updating deployment test-rollover-deployment
Jun 11 11:18:38.144: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jun 11 11:18:40.156: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jun 11 11:18:40.162: INFO: Make sure deployment "test-rollover-deployment" is complete
Jun 11 11:18:40.168: INFO: all replica sets need to contain the pod-template-hash label
Jun 11 11:18:40.169: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471116, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471116, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471118, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471114, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 11 11:18:42.175: INFO: all replica sets need to contain the pod-template-hash label
Jun 11 11:18:42.175: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471116, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471116, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471122, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471114, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 11 11:18:44.177: INFO: all replica sets need to contain the pod-template-hash label
Jun 11 11:18:44.177: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471116, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471116, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471122, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471114, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 11 11:18:46.178: INFO: all replica sets need to contain the pod-template-hash label
Jun 11 11:18:46.178: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471116, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471116, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471122, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471114, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 11 11:18:48.177: INFO: all replica sets need to contain the pod-template-hash label
Jun 11 11:18:48.177: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471116, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471116, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471122, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471114, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 11 11:18:50.178: INFO: all replica sets need to contain the pod-template-hash label
Jun 11 11:18:50.178: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471116, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471116, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471122, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471114, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 11 11:18:52.232: INFO: 
Jun 11 11:18:52.232: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471116, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471116, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471122, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471114, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 11 11:18:54.177: INFO: 
Jun 11 11:18:54.177: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jun 11 11:18:54.184: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-9839 /apis/apps/v1/namespaces/deployment-9839/deployments/test-rollover-deployment abdd4527-664a-4de4-8d05-8247d5a0363a 11511795 2 2020-06-11 11:18:33 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-06-11 11:18:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}],}} {kube-controller-manager Update apps/v1 2020-06-11 11:18:52 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00441e108  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-06-11 11:18:36 +0000 UTC,LastTransitionTime:2020-06-11 11:18:36 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-84f7f6f64b" has successfully progressed.,LastUpdateTime:2020-06-11 11:18:52 +0000 UTC,LastTransitionTime:2020-06-11 11:18:34 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jun 11 11:18:54.188: INFO: New ReplicaSet "test-rollover-deployment-84f7f6f64b" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-84f7f6f64b  deployment-9839 /apis/apps/v1/namespaces/deployment-9839/replicasets/test-rollover-deployment-84f7f6f64b 7d20a90d-dda4-43a8-91c6-519261575cdb 11511784 2 2020-06-11 11:18:38 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment abdd4527-664a-4de4-8d05-8247d5a0363a 0xc0043b8827 0xc0043b8828}] []  [{kube-controller-manager Update apps/v1 2020-06-11 11:18:52 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"abdd4527-664a-4de4-8d05-8247d5a0363a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 84f7f6f64b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0043b88b8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jun 11 11:18:54.188: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jun 11 11:18:54.189: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-9839 /apis/apps/v1/namespaces/deployment-9839/replicasets/test-rollover-controller 3cafa15a-9936-4a33-8694-2e6c3d407dee 11511794 2 2020-06-11 11:18:26 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment abdd4527-664a-4de4-8d05-8247d5a0363a 0xc0043b85ef 0xc0043b8600}] []  [{e2e.test Update apps/v1 2020-06-11 11:18:26 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}],}} {kube-controller-manager Update apps/v1 2020-06-11 11:18:52 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"abdd4527-664a-4de4-8d05-8247d5a0363a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0043b86b8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jun 11 11:18:54.189: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5  deployment-9839 /apis/apps/v1/namespaces/deployment-9839/replicasets/test-rollover-deployment-5686c4cfd5 44ae3ae1-8e96-480f-bbb9-779ec3bfe6b2 11511731 2 2020-06-11 11:18:34 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment abdd4527-664a-4de4-8d05-8247d5a0363a 0xc0043b8727 0xc0043b8728}] []  [{kube-controller-manager Update apps/v1 2020-06-11 11:18:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"abdd4527-664a-4de4-8d05-8247d5a0363a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0043b87b8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jun 11 11:18:54.192: INFO: Pod "test-rollover-deployment-84f7f6f64b-97xfz" is available:
&Pod{ObjectMeta:{test-rollover-deployment-84f7f6f64b-97xfz test-rollover-deployment-84f7f6f64b- deployment-9839 /api/v1/namespaces/deployment-9839/pods/test-rollover-deployment-84f7f6f64b-97xfz 93a8d708-40c8-4435-b17e-e24fef4834b9 11511748 0 2020-06-11 11:18:38 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [{apps/v1 ReplicaSet test-rollover-deployment-84f7f6f64b 7d20a90d-dda4-43a8-91c6-519261575cdb 0xc00441e547 0xc00441e548}] []  [{kube-controller-manager Update v1 2020-06-11 11:18:38 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d20a90d-dda4-43a8-91c6-519261575cdb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-06-11 11:18:42 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.175\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tfqsn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tfqsn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tfqsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:
nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:18:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:18:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:18:42 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:18:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.175,StartTime:2020-06-11 11:18:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-11 11:18:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://9e7efd688f6744fd2a8d015f021f1ec55a88d49d259faa24ffc6a7699f1a6211,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.175,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:18:54.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9839" for this suite.

• [SLOW TEST:27.337 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":76,"skipped":1279,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:18:54.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-8241
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-8241
STEP: Creating statefulset with conflicting port in namespace statefulset-8241
STEP: Waiting until pod test-pod starts running in namespace statefulset-8241
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-8241
Jun 11 11:19:00.529: INFO: Observed stateful pod in namespace: statefulset-8241, name: ss-0, uid: dbbea262-2511-4232-9f85-e48833769887, status phase: Pending. Waiting for statefulset controller to delete.
Jun 11 11:19:01.152: INFO: Observed stateful pod in namespace: statefulset-8241, name: ss-0, uid: dbbea262-2511-4232-9f85-e48833769887, status phase: Failed. Waiting for statefulset controller to delete.
Jun 11 11:19:01.182: INFO: Observed stateful pod in namespace: statefulset-8241, name: ss-0, uid: dbbea262-2511-4232-9f85-e48833769887, status phase: Failed. Waiting for statefulset controller to delete.
Jun 11 11:19:01.190: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8241
STEP: Removing pod with conflicting port in namespace statefulset-8241
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-8241 and is in the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jun 11 11:19:07.285: INFO: Deleting all statefulset in ns statefulset-8241
Jun 11 11:19:07.288: INFO: Scaling statefulset ss to 0
Jun 11 11:19:17.328: INFO: Waiting for statefulset status.replicas to be updated to 0
Jun 11 11:19:17.332: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:19:17.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8241" for this suite.

• [SLOW TEST:23.158 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":77,"skipped":1298,"failed":0}
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:19:17.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Jun 11 11:19:17.429: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jun 11 11:19:17.444: INFO: Waiting for terminating namespaces to be deleted...
Jun 11 11:19:17.446: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Jun 11 11:19:17.460: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Jun 11 11:19:17.460: INFO: 	Container kindnet-cni ready: true, restart count 3
Jun 11 11:19:17.460: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Jun 11 11:19:17.460: INFO: 	Container kube-proxy ready: true, restart count 0
Jun 11 11:19:17.460: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Jun 11 11:19:17.464: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Jun 11 11:19:17.464: INFO: 	Container kindnet-cni ready: true, restart count 2
Jun 11 11:19:17.464: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Jun 11 11:19:17.464: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: verifying the node has the label node kali-worker
STEP: verifying the node has the label node kali-worker2
Jun 11 11:19:17.573: INFO: Pod kindnet-f8plf requesting resource cpu=100m on Node kali-worker
Jun 11 11:19:17.573: INFO: Pod kindnet-mcdh2 requesting resource cpu=100m on Node kali-worker2
Jun 11 11:19:17.573: INFO: Pod kube-proxy-mmnb6 requesting resource cpu=0m on Node kali-worker2
Jun 11 11:19:17.573: INFO: Pod kube-proxy-vrswj requesting resource cpu=0m on Node kali-worker
STEP: Starting Pods to consume most of the cluster CPU.
Jun 11 11:19:17.573: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker
Jun 11 11:19:17.578: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-804930bf-b564-40d8-a416-cb457c2cffad.1617794b4e4cb16f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-124/filler-pod-804930bf-b564-40d8-a416-cb457c2cffad to kali-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-804930bf-b564-40d8-a416-cb457c2cffad.1617794bdf52488c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-804930bf-b564-40d8-a416-cb457c2cffad.1617794c1d2b7e4b], Reason = [Created], Message = [Created container filler-pod-804930bf-b564-40d8-a416-cb457c2cffad]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-804930bf-b564-40d8-a416-cb457c2cffad.1617794c2c9156ca], Reason = [Started], Message = [Started container filler-pod-804930bf-b564-40d8-a416-cb457c2cffad]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-956561a3-e702-4a51-ae50-b58071198dbd.1617794b4b417c4c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-124/filler-pod-956561a3-e702-4a51-ae50-b58071198dbd to kali-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-956561a3-e702-4a51-ae50-b58071198dbd.1617794b9c06341b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-956561a3-e702-4a51-ae50-b58071198dbd.1617794be5b7fcbf], Reason = [Created], Message = [Created container filler-pod-956561a3-e702-4a51-ae50-b58071198dbd]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-956561a3-e702-4a51-ae50-b58071198dbd.1617794c02f7e3e9], Reason = [Started], Message = [Started container filler-pod-956561a3-e702-4a51-ae50-b58071198dbd]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.1617794cb5c60dd7], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.1617794cbb0dc512], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node kali-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node kali-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:19:24.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-124" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:7.413 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":275,"completed":78,"skipped":1298,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:19:24.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jun 11 11:19:29.412: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:19:30.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5253" for this suite.

• [SLOW TEST:5.420 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":79,"skipped":1324,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:19:30.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jun 11 11:19:30.320: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e69cf151-cb17-4d78-a01d-0514fdadb471" in namespace "projected-9855" to be "Succeeded or Failed"
Jun 11 11:19:30.352: INFO: Pod "downwardapi-volume-e69cf151-cb17-4d78-a01d-0514fdadb471": Phase="Pending", Reason="", readiness=false. Elapsed: 32.289048ms
Jun 11 11:19:32.371: INFO: Pod "downwardapi-volume-e69cf151-cb17-4d78-a01d-0514fdadb471": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051492045s
Jun 11 11:19:34.375: INFO: Pod "downwardapi-volume-e69cf151-cb17-4d78-a01d-0514fdadb471": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055154425s
STEP: Saw pod success
Jun 11 11:19:34.375: INFO: Pod "downwardapi-volume-e69cf151-cb17-4d78-a01d-0514fdadb471" satisfied condition "Succeeded or Failed"
Jun 11 11:19:34.379: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-e69cf151-cb17-4d78-a01d-0514fdadb471 container client-container: 
STEP: delete the pod
Jun 11 11:19:34.677: INFO: Waiting for pod downwardapi-volume-e69cf151-cb17-4d78-a01d-0514fdadb471 to disappear
Jun 11 11:19:34.680: INFO: Pod downwardapi-volume-e69cf151-cb17-4d78-a01d-0514fdadb471 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:19:34.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9855" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":80,"skipped":1326,"failed":0}
SSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:19:34.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-2512
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-2512
I0611 11:19:35.130992       7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-2512, replica count: 2
I0611 11:19:38.181541       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0611 11:19:41.181810       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jun 11 11:19:41.181: INFO: Creating new exec pod
Jun 11 11:19:46.214: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-2512 execpodjv6jv -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jun 11 11:19:50.947: INFO: stderr: "I0611 11:19:50.822648    1514 log.go:172] (0xc0008bc790) (0xc00069b5e0) Create stream\nI0611 11:19:50.822699    1514 log.go:172] (0xc0008bc790) (0xc00069b5e0) Stream added, broadcasting: 1\nI0611 11:19:50.830962    1514 log.go:172] (0xc0008bc790) Reply frame received for 1\nI0611 11:19:50.831032    1514 log.go:172] (0xc0008bc790) (0xc000740000) Create stream\nI0611 11:19:50.831049    1514 log.go:172] (0xc0008bc790) (0xc000740000) Stream added, broadcasting: 3\nI0611 11:19:50.839430    1514 log.go:172] (0xc0008bc790) Reply frame received for 3\nI0611 11:19:50.839469    1514 log.go:172] (0xc0008bc790) (0xc000744000) Create stream\nI0611 11:19:50.839477    1514 log.go:172] (0xc0008bc790) (0xc000744000) Stream added, broadcasting: 5\nI0611 11:19:50.840468    1514 log.go:172] (0xc0008bc790) Reply frame received for 5\nI0611 11:19:50.905381    1514 log.go:172] (0xc0008bc790) Data frame received for 5\nI0611 11:19:50.905414    1514 log.go:172] (0xc000744000) (5) Data frame handling\nI0611 11:19:50.905435    1514 log.go:172] (0xc000744000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0611 11:19:50.936164    1514 log.go:172] (0xc0008bc790) Data frame received for 3\nI0611 11:19:50.936201    1514 log.go:172] (0xc000740000) (3) Data frame handling\nI0611 11:19:50.936232    1514 log.go:172] (0xc0008bc790) Data frame received for 5\nI0611 11:19:50.936266    1514 log.go:172] (0xc000744000) (5) Data frame handling\nI0611 11:19:50.936422    1514 log.go:172] (0xc000744000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0611 11:19:50.936634    1514 log.go:172] (0xc0008bc790) Data frame received for 5\nI0611 11:19:50.936662    1514 log.go:172] (0xc000744000) (5) Data frame handling\nI0611 11:19:50.939382    1514 log.go:172] (0xc0008bc790) Data frame received for 1\nI0611 11:19:50.939408    1514 log.go:172] (0xc00069b5e0) (1) Data frame handling\nI0611 11:19:50.939429    1514 log.go:172] 
(0xc00069b5e0) (1) Data frame sent\nI0611 11:19:50.939445    1514 log.go:172] (0xc0008bc790) (0xc00069b5e0) Stream removed, broadcasting: 1\nI0611 11:19:50.939569    1514 log.go:172] (0xc0008bc790) Go away received\nI0611 11:19:50.939943    1514 log.go:172] (0xc0008bc790) (0xc00069b5e0) Stream removed, broadcasting: 1\nI0611 11:19:50.939966    1514 log.go:172] (0xc0008bc790) (0xc000740000) Stream removed, broadcasting: 3\nI0611 11:19:50.939979    1514 log.go:172] (0xc0008bc790) (0xc000744000) Stream removed, broadcasting: 5\n"
Jun 11 11:19:50.947: INFO: stdout: ""
Jun 11 11:19:50.948: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-2512 execpodjv6jv -- /bin/sh -x -c nc -zv -t -w 2 10.104.164.177 80'
Jun 11 11:19:51.153: INFO: stderr: "I0611 11:19:51.071898    1546 log.go:172] (0xc00003a580) (0xc0008da000) Create stream\nI0611 11:19:51.071997    1546 log.go:172] (0xc00003a580) (0xc0008da000) Stream added, broadcasting: 1\nI0611 11:19:51.075307    1546 log.go:172] (0xc00003a580) Reply frame received for 1\nI0611 11:19:51.075352    1546 log.go:172] (0xc00003a580) (0xc0008da1e0) Create stream\nI0611 11:19:51.075363    1546 log.go:172] (0xc00003a580) (0xc0008da1e0) Stream added, broadcasting: 3\nI0611 11:19:51.076259    1546 log.go:172] (0xc00003a580) Reply frame received for 3\nI0611 11:19:51.076304    1546 log.go:172] (0xc00003a580) (0xc000a5e000) Create stream\nI0611 11:19:51.076334    1546 log.go:172] (0xc00003a580) (0xc000a5e000) Stream added, broadcasting: 5\nI0611 11:19:51.077699    1546 log.go:172] (0xc00003a580) Reply frame received for 5\nI0611 11:19:51.144193    1546 log.go:172] (0xc00003a580) Data frame received for 5\nI0611 11:19:51.144247    1546 log.go:172] (0xc000a5e000) (5) Data frame handling\nI0611 11:19:51.144271    1546 log.go:172] (0xc000a5e000) (5) Data frame sent\nI0611 11:19:51.144301    1546 log.go:172] (0xc00003a580) Data frame received for 5\nI0611 11:19:51.144338    1546 log.go:172] (0xc000a5e000) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.164.177 80\nConnection to 10.104.164.177 80 port [tcp/http] succeeded!\nI0611 11:19:51.144389    1546 log.go:172] (0xc00003a580) Data frame received for 3\nI0611 11:19:51.144409    1546 log.go:172] (0xc0008da1e0) (3) Data frame handling\nI0611 11:19:51.147073    1546 log.go:172] (0xc00003a580) Data frame received for 1\nI0611 11:19:51.147105    1546 log.go:172] (0xc0008da000) (1) Data frame handling\nI0611 11:19:51.147127    1546 log.go:172] (0xc0008da000) (1) Data frame sent\nI0611 11:19:51.147147    1546 log.go:172] (0xc00003a580) (0xc0008da000) Stream removed, broadcasting: 1\nI0611 11:19:51.147231    1546 log.go:172] (0xc00003a580) Go away received\nI0611 11:19:51.147511    1546 log.go:172] 
(0xc00003a580) (0xc0008da000) Stream removed, broadcasting: 1\nI0611 11:19:51.147534    1546 log.go:172] (0xc00003a580) (0xc0008da1e0) Stream removed, broadcasting: 3\nI0611 11:19:51.147545    1546 log.go:172] (0xc00003a580) (0xc000a5e000) Stream removed, broadcasting: 5\n"
Jun 11 11:19:51.153: INFO: stdout: ""
Jun 11 11:19:51.153: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-2512 execpodjv6jv -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.15 30557'
Jun 11 11:19:51.376: INFO: stderr: "I0611 11:19:51.296534    1566 log.go:172] (0xc000aca8f0) (0xc000a900a0) Create stream\nI0611 11:19:51.296582    1566 log.go:172] (0xc000aca8f0) (0xc000a900a0) Stream added, broadcasting: 1\nI0611 11:19:51.299275    1566 log.go:172] (0xc000aca8f0) Reply frame received for 1\nI0611 11:19:51.299312    1566 log.go:172] (0xc000aca8f0) (0xc0006bb220) Create stream\nI0611 11:19:51.299324    1566 log.go:172] (0xc000aca8f0) (0xc0006bb220) Stream added, broadcasting: 3\nI0611 11:19:51.300190    1566 log.go:172] (0xc000aca8f0) Reply frame received for 3\nI0611 11:19:51.300221    1566 log.go:172] (0xc000aca8f0) (0xc000a901e0) Create stream\nI0611 11:19:51.300240    1566 log.go:172] (0xc000aca8f0) (0xc000a901e0) Stream added, broadcasting: 5\nI0611 11:19:51.301062    1566 log.go:172] (0xc000aca8f0) Reply frame received for 5\nI0611 11:19:51.368403    1566 log.go:172] (0xc000aca8f0) Data frame received for 5\nI0611 11:19:51.368431    1566 log.go:172] (0xc000a901e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.15 30557\nConnection to 172.17.0.15 30557 port [tcp/30557] succeeded!\nI0611 11:19:51.368449    1566 log.go:172] (0xc000aca8f0) Data frame received for 3\nI0611 11:19:51.368484    1566 log.go:172] (0xc0006bb220) (3) Data frame handling\nI0611 11:19:51.368515    1566 log.go:172] (0xc000a901e0) (5) Data frame sent\nI0611 11:19:51.368534    1566 log.go:172] (0xc000aca8f0) Data frame received for 5\nI0611 11:19:51.368550    1566 log.go:172] (0xc000a901e0) (5) Data frame handling\nI0611 11:19:51.369932    1566 log.go:172] (0xc000aca8f0) Data frame received for 1\nI0611 11:19:51.369958    1566 log.go:172] (0xc000a900a0) (1) Data frame handling\nI0611 11:19:51.369975    1566 log.go:172] (0xc000a900a0) (1) Data frame sent\nI0611 11:19:51.369990    1566 log.go:172] (0xc000aca8f0) (0xc000a900a0) Stream removed, broadcasting: 1\nI0611 11:19:51.370006    1566 log.go:172] (0xc000aca8f0) Go away received\nI0611 11:19:51.370362    1566 log.go:172] 
(0xc000aca8f0) (0xc000a900a0) Stream removed, broadcasting: 1\nI0611 11:19:51.370385    1566 log.go:172] (0xc000aca8f0) (0xc0006bb220) Stream removed, broadcasting: 3\nI0611 11:19:51.370395    1566 log.go:172] (0xc000aca8f0) (0xc000a901e0) Stream removed, broadcasting: 5\n"
Jun 11 11:19:51.376: INFO: stdout: ""
Jun 11 11:19:51.376: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-2512 execpodjv6jv -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 30557'
Jun 11 11:19:51.601: INFO: stderr: "I0611 11:19:51.513305    1587 log.go:172] (0xc0000e00b0) (0xc000a3e000) Create stream\nI0611 11:19:51.513401    1587 log.go:172] (0xc0000e00b0) (0xc000a3e000) Stream added, broadcasting: 1\nI0611 11:19:51.518941    1587 log.go:172] (0xc0000e00b0) Reply frame received for 1\nI0611 11:19:51.518984    1587 log.go:172] (0xc0000e00b0) (0xc0006137c0) Create stream\nI0611 11:19:51.518992    1587 log.go:172] (0xc0000e00b0) (0xc0006137c0) Stream added, broadcasting: 3\nI0611 11:19:51.519711    1587 log.go:172] (0xc0000e00b0) Reply frame received for 3\nI0611 11:19:51.519751    1587 log.go:172] (0xc0000e00b0) (0xc000480c80) Create stream\nI0611 11:19:51.519768    1587 log.go:172] (0xc0000e00b0) (0xc000480c80) Stream added, broadcasting: 5\nI0611 11:19:51.520405    1587 log.go:172] (0xc0000e00b0) Reply frame received for 5\nI0611 11:19:51.592629    1587 log.go:172] (0xc0000e00b0) Data frame received for 3\nI0611 11:19:51.592654    1587 log.go:172] (0xc0006137c0) (3) Data frame handling\nI0611 11:19:51.592686    1587 log.go:172] (0xc0000e00b0) Data frame received for 5\nI0611 11:19:51.592711    1587 log.go:172] (0xc000480c80) (5) Data frame handling\nI0611 11:19:51.592731    1587 log.go:172] (0xc000480c80) (5) Data frame sent\nI0611 11:19:51.592754    1587 log.go:172] (0xc0000e00b0) Data frame received for 5\nI0611 11:19:51.592765    1587 log.go:172] (0xc000480c80) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.18 30557\nConnection to 172.17.0.18 30557 port [tcp/30557] succeeded!\nI0611 11:19:51.594029    1587 log.go:172] (0xc0000e00b0) Data frame received for 1\nI0611 11:19:51.594050    1587 log.go:172] (0xc000a3e000) (1) Data frame handling\nI0611 11:19:51.594062    1587 log.go:172] (0xc000a3e000) (1) Data frame sent\nI0611 11:19:51.594073    1587 log.go:172] (0xc0000e00b0) (0xc000a3e000) Stream removed, broadcasting: 1\nI0611 11:19:51.594086    1587 log.go:172] (0xc0000e00b0) Go away received\nI0611 11:19:51.594479    1587 log.go:172] 
(0xc0000e00b0) (0xc000a3e000) Stream removed, broadcasting: 1\nI0611 11:19:51.594514    1587 log.go:172] (0xc0000e00b0) (0xc0006137c0) Stream removed, broadcasting: 3\nI0611 11:19:51.594524    1587 log.go:172] (0xc0000e00b0) (0xc000480c80) Stream removed, broadcasting: 5\n"
Jun 11 11:19:51.601: INFO: stdout: ""
Jun 11 11:19:51.601: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:19:51.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2512" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:17.214 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":81,"skipped":1330,"failed":0}
S
------------------------------
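The service test above verifies reachability by exec'ing `nc -zv -t -w 2 <ip> <port>` inside a helper pod against the ClusterIP and each node's NodePort. Outside the e2e framework, the same TCP connect probe can be sketched in Python; the host/port here are placeholders, not the cluster addresses from this run:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """TCP connect check, analogous to `nc -zv -t -w 2 host port`:
    succeed if a connection can be opened within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Like `nc -z`, this only confirms the TCP handshake completes; it sends no application data, which is all the ExternalName-to-NodePort test needs.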
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:19:52.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Jun 11 11:19:52.338: INFO: Waiting up to 5m0s for pod "downward-api-3b637a67-55c8-4fb1-a31a-1cdca751c4f9" in namespace "downward-api-823" to be "Succeeded or Failed"
Jun 11 11:19:52.372: INFO: Pod "downward-api-3b637a67-55c8-4fb1-a31a-1cdca751c4f9": Phase="Pending", Reason="", readiness=false. Elapsed: 33.793687ms
Jun 11 11:19:54.375: INFO: Pod "downward-api-3b637a67-55c8-4fb1-a31a-1cdca751c4f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037257606s
Jun 11 11:19:56.383: INFO: Pod "downward-api-3b637a67-55c8-4fb1-a31a-1cdca751c4f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045396305s
STEP: Saw pod success
Jun 11 11:19:56.384: INFO: Pod "downward-api-3b637a67-55c8-4fb1-a31a-1cdca751c4f9" satisfied condition "Succeeded or Failed"
Jun 11 11:19:56.387: INFO: Trying to get logs from node kali-worker2 pod downward-api-3b637a67-55c8-4fb1-a31a-1cdca751c4f9 container dapi-container: 
STEP: delete the pod
Jun 11 11:19:56.453: INFO: Waiting for pod downward-api-3b637a67-55c8-4fb1-a31a-1cdca751c4f9 to disappear
Jun 11 11:19:56.457: INFO: Pod downward-api-3b637a67-55c8-4fb1-a31a-1cdca751c4f9 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:19:56.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-823" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":82,"skipped":1331,"failed":0}
SSSSSS
------------------------------
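The Downward API tests above (including the default-cpu-limit case in the suite header) exercise one rule: requested values are exposed as-is, while a missing limit falls back to the node's allocatable resources. A minimal sketch of that resolution, with hypothetical env var names and resource strings for illustration:

```python
def downward_env(limits: dict, requests: dict, node_allocatable: dict) -> dict:
    """Resolve downward-API style env vars: a declared limit wins,
    otherwise fall back to the node's allocatable value."""
    def limit(resource: str) -> str:
        return limits.get(resource, node_allocatable[resource])
    return {
        "CPU_LIMIT": limit("cpu"),
        "MEMORY_LIMIT": limit("memory"),
        "CPU_REQUEST": requests.get("cpu"),
        "MEMORY_REQUEST": requests.get("memory"),
    }
```

The test pod in the log asserts exactly this: with no memory limit set, the container would see the node allocatable figure rather than an empty value.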
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:19:56.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-36010cf1-2510-4c30-b80a-77490a6267cf
STEP: Creating a pod to test consume configMaps
Jun 11 11:19:58.899: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-733a8a5a-7481-47d8-8ba6-043d3fcdef0d" in namespace "projected-5341" to be "Succeeded or Failed"
Jun 11 11:19:58.944: INFO: Pod "pod-projected-configmaps-733a8a5a-7481-47d8-8ba6-043d3fcdef0d": Phase="Pending", Reason="", readiness=false. Elapsed: 45.077462ms
Jun 11 11:20:00.948: INFO: Pod "pod-projected-configmaps-733a8a5a-7481-47d8-8ba6-043d3fcdef0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049030096s
Jun 11 11:20:02.952: INFO: Pod "pod-projected-configmaps-733a8a5a-7481-47d8-8ba6-043d3fcdef0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053234155s
STEP: Saw pod success
Jun 11 11:20:02.952: INFO: Pod "pod-projected-configmaps-733a8a5a-7481-47d8-8ba6-043d3fcdef0d" satisfied condition "Succeeded or Failed"
Jun 11 11:20:02.955: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-733a8a5a-7481-47d8-8ba6-043d3fcdef0d container projected-configmap-volume-test: 
STEP: delete the pod
Jun 11 11:20:03.003: INFO: Waiting for pod pod-projected-configmaps-733a8a5a-7481-47d8-8ba6-043d3fcdef0d to disappear
Jun 11 11:20:03.042: INFO: Pod pod-projected-configmaps-733a8a5a-7481-47d8-8ba6-043d3fcdef0d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:20:03.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5341" for this suite.

• [SLOW TEST:6.734 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":83,"skipped":1337,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:20:03.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Jun 11 11:20:03.463: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-74'
Jun 11 11:20:03.769: INFO: stderr: ""
Jun 11 11:20:03.769: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 11 11:20:03.770: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-74'
Jun 11 11:20:03.875: INFO: stderr: ""
Jun 11 11:20:03.875: INFO: stdout: "update-demo-nautilus-6d26c update-demo-nautilus-vd46w "
Jun 11 11:20:03.875: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6d26c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-74'
Jun 11 11:20:04.021: INFO: stderr: ""
Jun 11 11:20:04.021: INFO: stdout: ""
Jun 11 11:20:04.021: INFO: update-demo-nautilus-6d26c is created but not running
Jun 11 11:20:09.021: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-74'
Jun 11 11:20:09.121: INFO: stderr: ""
Jun 11 11:20:09.121: INFO: stdout: "update-demo-nautilus-6d26c update-demo-nautilus-vd46w "
Jun 11 11:20:09.121: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6d26c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-74'
Jun 11 11:20:09.214: INFO: stderr: ""
Jun 11 11:20:09.214: INFO: stdout: "true"
Jun 11 11:20:09.214: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6d26c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-74'
Jun 11 11:20:09.319: INFO: stderr: ""
Jun 11 11:20:09.319: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 11 11:20:09.319: INFO: validating pod update-demo-nautilus-6d26c
Jun 11 11:20:09.323: INFO: got data: {
  "image": "nautilus.jpg"
}

Jun 11 11:20:09.323: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 11 11:20:09.323: INFO: update-demo-nautilus-6d26c is verified up and running
Jun 11 11:20:09.324: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vd46w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-74'
Jun 11 11:20:09.420: INFO: stderr: ""
Jun 11 11:20:09.420: INFO: stdout: "true"
Jun 11 11:20:09.420: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vd46w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-74'
Jun 11 11:20:09.515: INFO: stderr: ""
Jun 11 11:20:09.515: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 11 11:20:09.515: INFO: validating pod update-demo-nautilus-vd46w
Jun 11 11:20:09.519: INFO: got data: {
  "image": "nautilus.jpg"
}

Jun 11 11:20:09.519: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 11 11:20:09.519: INFO: update-demo-nautilus-vd46w is verified up and running
STEP: scaling down the replication controller
Jun 11 11:20:09.555: INFO: scanned /root for discovery docs: 
Jun 11 11:20:09.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-74'
Jun 11 11:20:10.710: INFO: stderr: ""
Jun 11 11:20:10.710: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 11 11:20:10.710: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-74'
Jun 11 11:20:10.816: INFO: stderr: ""
Jun 11 11:20:10.816: INFO: stdout: "update-demo-nautilus-6d26c update-demo-nautilus-vd46w "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jun 11 11:20:15.817: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-74'
Jun 11 11:20:15.919: INFO: stderr: ""
Jun 11 11:20:15.919: INFO: stdout: "update-demo-nautilus-6d26c "
Jun 11 11:20:15.919: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6d26c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-74'
Jun 11 11:20:16.021: INFO: stderr: ""
Jun 11 11:20:16.021: INFO: stdout: "true"
Jun 11 11:20:16.021: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6d26c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-74'
Jun 11 11:20:16.125: INFO: stderr: ""
Jun 11 11:20:16.125: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 11 11:20:16.125: INFO: validating pod update-demo-nautilus-6d26c
Jun 11 11:20:16.129: INFO: got data: {
  "image": "nautilus.jpg"
}

Jun 11 11:20:16.129: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 11 11:20:16.129: INFO: update-demo-nautilus-6d26c is verified up and running
STEP: scaling up the replication controller
Jun 11 11:20:16.131: INFO: scanned /root for discovery docs: 
Jun 11 11:20:16.131: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-74'
Jun 11 11:20:17.254: INFO: stderr: ""
Jun 11 11:20:17.254: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 11 11:20:17.254: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-74'
Jun 11 11:20:17.343: INFO: stderr: ""
Jun 11 11:20:17.343: INFO: stdout: "update-demo-nautilus-6d26c update-demo-nautilus-l25kf "
Jun 11 11:20:17.343: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6d26c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-74'
Jun 11 11:20:17.431: INFO: stderr: ""
Jun 11 11:20:17.431: INFO: stdout: "true"
Jun 11 11:20:17.431: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6d26c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-74'
Jun 11 11:20:17.529: INFO: stderr: ""
Jun 11 11:20:17.529: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 11 11:20:17.529: INFO: validating pod update-demo-nautilus-6d26c
Jun 11 11:20:17.532: INFO: got data: {
  "image": "nautilus.jpg"
}

Jun 11 11:20:17.532: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 11 11:20:17.532: INFO: update-demo-nautilus-6d26c is verified up and running
Jun 11 11:20:17.532: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l25kf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-74'
Jun 11 11:20:17.645: INFO: stderr: ""
Jun 11 11:20:17.645: INFO: stdout: ""
Jun 11 11:20:17.645: INFO: update-demo-nautilus-l25kf is created but not running
Jun 11 11:20:22.645: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-74'
Jun 11 11:20:22.765: INFO: stderr: ""
Jun 11 11:20:22.765: INFO: stdout: "update-demo-nautilus-6d26c update-demo-nautilus-l25kf "
Jun 11 11:20:22.766: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6d26c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-74'
Jun 11 11:20:22.853: INFO: stderr: ""
Jun 11 11:20:22.853: INFO: stdout: "true"
Jun 11 11:20:22.853: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6d26c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-74'
Jun 11 11:20:22.954: INFO: stderr: ""
Jun 11 11:20:22.954: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 11 11:20:22.954: INFO: validating pod update-demo-nautilus-6d26c
Jun 11 11:20:22.958: INFO: got data: {
  "image": "nautilus.jpg"
}

Jun 11 11:20:22.958: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 11 11:20:22.958: INFO: update-demo-nautilus-6d26c is verified up and running
Jun 11 11:20:22.958: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l25kf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-74'
Jun 11 11:20:23.054: INFO: stderr: ""
Jun 11 11:20:23.054: INFO: stdout: "true"
Jun 11 11:20:23.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l25kf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-74'
Jun 11 11:20:23.135: INFO: stderr: ""
Jun 11 11:20:23.135: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 11 11:20:23.135: INFO: validating pod update-demo-nautilus-l25kf
Jun 11 11:20:23.138: INFO: got data: {
  "image": "nautilus.jpg"
}

Jun 11 11:20:23.139: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 11 11:20:23.139: INFO: update-demo-nautilus-l25kf is verified up and running
STEP: using delete to clean up resources
Jun 11 11:20:23.139: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-74'
Jun 11 11:20:23.252: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 11 11:20:23.252: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jun 11 11:20:23.252: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-74'
Jun 11 11:20:23.349: INFO: stderr: "No resources found in kubectl-74 namespace.\n"
Jun 11 11:20:23.349: INFO: stdout: ""
Jun 11 11:20:23.350: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-74 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jun 11 11:20:23.545: INFO: stderr: ""
Jun 11 11:20:23.545: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:20:23.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-74" for this suite.

• [SLOW TEST:20.356 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":275,"completed":84,"skipped":1346,"failed":0}
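The Update Demo test above scales the replication controller, then polls `kubectl get pods -l name=update-demo` roughly every 5 seconds until the observed pod count matches the requested replicas (note the `Replicas for name=update-demo: expected=1 actual=2` retry in the log). The wait loop can be sketched with the pod lister injected, so it runs without a cluster; the interval and timeout values are illustrative:

```python
import time

def wait_for_replicas(list_pods, expected: int,
                      interval: float = 5.0, timeout: float = 300.0) -> bool:
    """Poll list_pods() until it returns exactly `expected` pod names,
    mirroring the scale-then-wait pattern in the Update Demo test."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if len(list_pods()) == expected:
            return True
        time.sleep(interval)
    return False
```

Injecting the lister is what makes the pattern testable; in the e2e framework the equivalent call shells out to `kubectl get pods -o template`.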
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:20:23.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jun 11 11:20:23.772: INFO: Waiting up to 5m0s for pod "pod-68dd612f-4180-465a-b5ff-c8595faa3167" in namespace "emptydir-7783" to be "Succeeded or Failed"
Jun 11 11:20:23.785: INFO: Pod "pod-68dd612f-4180-465a-b5ff-c8595faa3167": Phase="Pending", Reason="", readiness=false. Elapsed: 12.723832ms
Jun 11 11:20:25.788: INFO: Pod "pod-68dd612f-4180-465a-b5ff-c8595faa3167": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016422711s
Jun 11 11:20:27.793: INFO: Pod "pod-68dd612f-4180-465a-b5ff-c8595faa3167": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021545626s
Jun 11 11:20:29.798: INFO: Pod "pod-68dd612f-4180-465a-b5ff-c8595faa3167": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025983738s
STEP: Saw pod success
Jun 11 11:20:29.798: INFO: Pod "pod-68dd612f-4180-465a-b5ff-c8595faa3167" satisfied condition "Succeeded or Failed"
Jun 11 11:20:29.801: INFO: Trying to get logs from node kali-worker2 pod pod-68dd612f-4180-465a-b5ff-c8595faa3167 container test-container: 
STEP: delete the pod
Jun 11 11:20:29.842: INFO: Waiting for pod pod-68dd612f-4180-465a-b5ff-c8595faa3167 to disappear
Jun 11 11:20:29.854: INFO: Pod pod-68dd612f-4180-465a-b5ff-c8595faa3167 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:20:29.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7783" for this suite.

• [SLOW TEST:6.347 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":85,"skipped":1346,"failed":0}
SS
------------------------------
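Several tests in this log wait up to 5m0s for a pod to reach the condition "Succeeded or Failed", re-checking about every 2 seconds (visible in the Pending → Pending → Succeeded progressions above). A sketch of that wait with the phase source injected; interval and timeout are placeholders, not the framework's exact values:

```python
import time

def wait_for_terminal_phase(get_phase, interval: float = 2.0,
                            timeout: float = 300.0):
    """Return the pod's final phase once it leaves Pending/Running,
    or None if the timeout elapses first."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    return None
```

Treating both Succeeded and Failed as terminal lets the test stop polling immediately and then assert on which of the two it saw, which is why the log's condition is phrased as "Succeeded or Failed" rather than just success.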
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:20:29.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Jun 11 11:20:29.959: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jun 11 11:20:29.989: INFO: Waiting for terminating namespaces to be deleted...
Jun 11 11:20:29.993: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Jun 11 11:20:30.002: INFO: update-demo-nautilus-l25kf from kubectl-74 started at 2020-06-11 11:20:16 +0000 UTC (1 container status recorded)
Jun 11 11:20:30.002: INFO: 	Container update-demo ready: false, restart count 0
Jun 11 11:20:30.002: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
Jun 11 11:20:30.002: INFO: 	Container kindnet-cni ready: true, restart count 3
Jun 11 11:20:30.002: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
Jun 11 11:20:30.002: INFO: 	Container kube-proxy ready: true, restart count 0
Jun 11 11:20:30.002: INFO: update-demo-nautilus-6d26c from kubectl-74 started at 2020-06-11 11:20:03 +0000 UTC (1 container status recorded)
Jun 11 11:20:30.002: INFO: 	Container update-demo ready: false, restart count 0
Jun 11 11:20:30.002: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Jun 11 11:20:30.022: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
Jun 11 11:20:30.022: INFO: 	Container kindnet-cni ready: true, restart count 2
Jun 11 11:20:30.022: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
Jun 11 11:20:30.022: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-04704f89-fef2-463f-b01a-2bd2f28d0403 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-04704f89-fef2-463f-b01a-2bd2f28d0403 off the node kali-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-04704f89-fef2-463f-b01a-2bd2f28d0403
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:20:38.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2337" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:8.324 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":275,"completed":86,"skipped":1348,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:20:38.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Jun 11 11:20:38.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:20:53.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1963" for this suite.

• [SLOW TEST:15.691 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":87,"skipped":1354,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:20:53.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-5684/configmap-test-4baccfa6-f5d4-4168-9ca7-132c5504fb04
STEP: Creating a pod to test consume configMaps
Jun 11 11:20:53.982: INFO: Waiting up to 5m0s for pod "pod-configmaps-7890089c-e499-4028-9f40-6bbcb50cb90d" in namespace "configmap-5684" to be "Succeeded or Failed"
Jun 11 11:20:53.986: INFO: Pod "pod-configmaps-7890089c-e499-4028-9f40-6bbcb50cb90d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.594934ms
Jun 11 11:20:56.028: INFO: Pod "pod-configmaps-7890089c-e499-4028-9f40-6bbcb50cb90d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04561335s
Jun 11 11:20:58.033: INFO: Pod "pod-configmaps-7890089c-e499-4028-9f40-6bbcb50cb90d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050480565s
STEP: Saw pod success
Jun 11 11:20:58.033: INFO: Pod "pod-configmaps-7890089c-e499-4028-9f40-6bbcb50cb90d" satisfied condition "Succeeded or Failed"
Jun 11 11:20:58.036: INFO: Trying to get logs from node kali-worker pod pod-configmaps-7890089c-e499-4028-9f40-6bbcb50cb90d container env-test: 
STEP: delete the pod
Jun 11 11:20:58.077: INFO: Waiting for pod pod-configmaps-7890089c-e499-4028-9f40-6bbcb50cb90d to disappear
Jun 11 11:20:58.097: INFO: Pod pod-configmaps-7890089c-e499-4028-9f40-6bbcb50cb90d no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:20:58.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5684" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":88,"skipped":1384,"failed":0}
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:20:58.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:21:04.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7608" for this suite.

• [SLOW TEST:6.156 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:188
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":89,"skipped":1389,"failed":0}
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:21:04.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-4805
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Jun 11 11:21:04.385: INFO: Found 0 stateful pods, waiting for 3
Jun 11 11:21:14.391: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 11 11:21:14.391: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 11 11:21:14.391: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jun 11 11:21:24.402: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 11 11:21:24.402: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 11 11:21:24.402: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jun 11 11:21:24.430: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jun 11 11:21:34.490: INFO: Updating stateful set ss2
Jun 11 11:21:34.543: INFO: Waiting for Pod statefulset-4805/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Jun 11 11:21:44.723: INFO: Found 2 stateful pods, waiting for 3
Jun 11 11:21:54.731: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 11 11:21:54.731: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 11 11:21:54.731: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jun 11 11:21:54.752: INFO: Updating stateful set ss2
Jun 11 11:21:54.783: INFO: Waiting for Pod statefulset-4805/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jun 11 11:22:04.811: INFO: Updating stateful set ss2
Jun 11 11:22:04.850: INFO: Waiting for StatefulSet statefulset-4805/ss2 to complete update
Jun 11 11:22:04.850: INFO: Waiting for Pod statefulset-4805/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jun 11 11:22:14.858: INFO: Waiting for StatefulSet statefulset-4805/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jun 11 11:22:24.856: INFO: Deleting all statefulset in ns statefulset-4805
Jun 11 11:22:24.859: INFO: Scaling statefulset ss2 to 0
Jun 11 11:22:44.904: INFO: Waiting for statefulset status.replicas updated to 0
Jun 11 11:22:44.908: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:22:44.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4805" for this suite.

• [SLOW TEST:100.693 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":90,"skipped":1390,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:22:44.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jun 11 11:22:45.022: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1943 /api/v1/namespaces/watch-1943/configmaps/e2e-watch-test-configmap-a 77086b1d-5ba8-43ae-8839-d801a4be4ecf 11513344 0 2020-06-11 11:22:45 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-06-11 11:22:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jun 11 11:22:45.023: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1943 /api/v1/namespaces/watch-1943/configmaps/e2e-watch-test-configmap-a 77086b1d-5ba8-43ae-8839-d801a4be4ecf 11513344 0 2020-06-11 11:22:45 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-06-11 11:22:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jun 11 11:22:55.032: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1943 /api/v1/namespaces/watch-1943/configmaps/e2e-watch-test-configmap-a 77086b1d-5ba8-43ae-8839-d801a4be4ecf 11513433 0 2020-06-11 11:22:45 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-06-11 11:22:55 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Jun 11 11:22:55.032: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1943 /api/v1/namespaces/watch-1943/configmaps/e2e-watch-test-configmap-a 77086b1d-5ba8-43ae-8839-d801a4be4ecf 11513433 0 2020-06-11 11:22:45 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-06-11 11:22:55 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jun 11 11:23:05.040: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1943 /api/v1/namespaces/watch-1943/configmaps/e2e-watch-test-configmap-a 77086b1d-5ba8-43ae-8839-d801a4be4ecf 11513460 0 2020-06-11 11:22:45 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-06-11 11:23:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jun 11 11:23:05.041: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1943 /api/v1/namespaces/watch-1943/configmaps/e2e-watch-test-configmap-a 77086b1d-5ba8-43ae-8839-d801a4be4ecf 11513460 0 2020-06-11 11:22:45 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-06-11 11:23:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jun 11 11:23:15.047: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1943 /api/v1/namespaces/watch-1943/configmaps/e2e-watch-test-configmap-a 77086b1d-5ba8-43ae-8839-d801a4be4ecf 11513490 0 2020-06-11 11:22:45 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-06-11 11:23:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jun 11 11:23:15.047: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1943 /api/v1/namespaces/watch-1943/configmaps/e2e-watch-test-configmap-a 77086b1d-5ba8-43ae-8839-d801a4be4ecf 11513490 0 2020-06-11 11:22:45 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-06-11 11:23:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jun 11 11:23:25.055: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1943 /api/v1/namespaces/watch-1943/configmaps/e2e-watch-test-configmap-b 3d307bdf-d82d-4751-a3d5-77d136e8cb34 11513520 0 2020-06-11 11:23:25 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-06-11 11:23:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jun 11 11:23:25.055: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1943 /api/v1/namespaces/watch-1943/configmaps/e2e-watch-test-configmap-b 3d307bdf-d82d-4751-a3d5-77d136e8cb34 11513520 0 2020-06-11 11:23:25 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-06-11 11:23:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jun 11 11:23:35.063: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1943 /api/v1/namespaces/watch-1943/configmaps/e2e-watch-test-configmap-b 3d307bdf-d82d-4751-a3d5-77d136e8cb34 11513550 0 2020-06-11 11:23:25 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-06-11 11:23:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jun 11 11:23:35.063: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1943 /api/v1/namespaces/watch-1943/configmaps/e2e-watch-test-configmap-b 3d307bdf-d82d-4751-a3d5-77d136e8cb34 11513550 0 2020-06-11 11:23:25 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-06-11 11:23:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:23:45.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1943" for this suite.

• [SLOW TEST:60.117 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":91,"skipped":1427,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:23:45.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jun 11 11:23:52.395: INFO: 10 pods remaining
Jun 11 11:23:52.395: INFO: 10 pods have nil DeletionTimestamp
Jun 11 11:23:52.395: INFO: 
Jun 11 11:23:54.492: INFO: 0 pods remaining
Jun 11 11:23:54.492: INFO: 0 pods have nil DeletionTimestamp
Jun 11 11:23:54.492: INFO: 
STEP: Gathering metrics
W0611 11:23:55.270320       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 11 11:23:55.270: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:23:55.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7394" for this suite.

• [SLOW TEST:11.229 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":92,"skipped":1449,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:23:56.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:24:14.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2590" for this suite.

• [SLOW TEST:18.110 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":93,"skipped":1452,"failed":0}
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:24:14.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-6f37d5d4-9afa-49e8-abb3-2e319f9748e6
STEP: Creating a pod to test consume secrets
Jun 11 11:24:14.471: INFO: Waiting up to 5m0s for pod "pod-secrets-5936c36e-0153-4571-9234-bb352329a4c4" in namespace "secrets-5449" to be "Succeeded or Failed"
Jun 11 11:24:14.498: INFO: Pod "pod-secrets-5936c36e-0153-4571-9234-bb352329a4c4": Phase="Pending", Reason="", readiness=false. Elapsed: 27.534028ms
Jun 11 11:24:16.512: INFO: Pod "pod-secrets-5936c36e-0153-4571-9234-bb352329a4c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041112217s
Jun 11 11:24:18.516: INFO: Pod "pod-secrets-5936c36e-0153-4571-9234-bb352329a4c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045058563s
STEP: Saw pod success
Jun 11 11:24:18.516: INFO: Pod "pod-secrets-5936c36e-0153-4571-9234-bb352329a4c4" satisfied condition "Succeeded or Failed"
Jun 11 11:24:18.519: INFO: Trying to get logs from node kali-worker pod pod-secrets-5936c36e-0153-4571-9234-bb352329a4c4 container secret-volume-test: 
STEP: delete the pod
Jun 11 11:24:18.704: INFO: Waiting for pod pod-secrets-5936c36e-0153-4571-9234-bb352329a4c4 to disappear
Jun 11 11:24:18.727: INFO: Pod pod-secrets-5936c36e-0153-4571-9234-bb352329a4c4 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:24:18.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5449" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":94,"skipped":1453,"failed":0}
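
The "mappings and Item Mode" wording above refers to a secret volume that remaps a key to a different file path and sets a per-item file mode. A minimal sketch of such a pod, with hypothetical names standing in for the test's generated ones:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo              # illustrative name
spec:
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-demo  # illustrative secret name
      items:
      - key: data-1                   # mapping: secret key -> different file path
        path: new-path-data-1
        mode: 0400                    # per-item file mode (hence [LinuxOnly])
  restartPolicy: Never
```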
SSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:24:18.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:24:19.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2868" for this suite.
STEP: Destroying namespace "nspatchtest-04e439b3-83d6-4a2a-89c5-06a095b1118f-4195" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":95,"skipped":1462,"failed":0}
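
The "patching the Namespace" step applies a merge patch that adds a label, which the following step then reads back. A sketch of such a patch body (the label key and value are illustrative):

```json
{"metadata": {"labels": {"testLabel": "testValue"}}}
```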
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:24:19.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Jun 11 11:24:19.292: INFO: >>> kubeConfig: /root/.kube/config
Jun 11 11:24:22.256: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:24:32.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-125" for this suite.

• [SLOW TEST:12.969 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":96,"skipped":1527,"failed":0}
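
"Multiple CRDs of same group and version but different kinds" means two CustomResourceDefinitions sharing `spec.group` and a version name while declaring distinct `kind`s, each of whose structural schemas must show up in the cluster's published OpenAPI document. A sketch with hypothetical group and kind names:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.crd-demo.example.com     # illustrative
spec:
  group: crd-demo.example.com
  scope: Namespaced
  names: {plural: foos, singular: foo, kind: Foo}
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: bars.crd-demo.example.com     # illustrative
spec:
  group: crd-demo.example.com         # same group and version as Foo, different kind
  scope: Namespaced
  names: {plural: bars, singular: bar, kind: Bar}
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
```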
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:24:32.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jun 11 11:24:32.191: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fadcfb3d-f58a-4e48-8494-59a5005d5cfb" in namespace "downward-api-3782" to be "Succeeded or Failed"
Jun 11 11:24:32.194: INFO: Pod "downwardapi-volume-fadcfb3d-f58a-4e48-8494-59a5005d5cfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.842738ms
Jun 11 11:24:34.198: INFO: Pod "downwardapi-volume-fadcfb3d-f58a-4e48-8494-59a5005d5cfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00709207s
Jun 11 11:24:36.203: INFO: Pod "downwardapi-volume-fadcfb3d-f58a-4e48-8494-59a5005d5cfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011975324s
STEP: Saw pod success
Jun 11 11:24:36.203: INFO: Pod "downwardapi-volume-fadcfb3d-f58a-4e48-8494-59a5005d5cfb" satisfied condition "Succeeded or Failed"
Jun 11 11:24:36.282: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-fadcfb3d-f58a-4e48-8494-59a5005d5cfb container client-container: 
STEP: delete the pod
Jun 11 11:24:36.444: INFO: Waiting for pod downwardapi-volume-fadcfb3d-f58a-4e48-8494-59a5005d5cfb to disappear
Jun 11 11:24:36.458: INFO: Pod downwardapi-volume-fadcfb3d-f58a-4e48-8494-59a5005d5cfb no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:24:36.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3782" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":97,"skipped":1532,"failed":0}
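
The downward API volume plugin tested above exposes a container's memory limit as a file via `resourceFieldRef`. A minimal sketch of the kind of pod the test creates (names and the limit value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo       # illustrative name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory     # written to the file in bytes
  restartPolicy: Never
```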
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:24:36.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:24:36.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7366" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":98,"skipped":1551,"failed":0}
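
The patch/delete flow above hinges on a merge patch that relabels the secret, after which the secret can be selected (and deleted) by that label. A sketch of such a patch body; the label and the base64 data value ("value1") are illustrative:

```json
{
  "metadata": {"labels": {"testsecret": "true"}},
  "data": {"key": "dmFsdWUx"}
}
```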
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:24:36.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1719.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1719.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1719.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1719.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1719.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1719.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1719.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1719.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1719.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1719.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1719.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1719.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1719.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 90.156.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.156.90_udp@PTR;check="$$(dig +tcp +noall +answer +search 90.156.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.156.90_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1719.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1719.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1719.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1719.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1719.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1719.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1719.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1719.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1719.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1719.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1719.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1719.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1719.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 90.156.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.156.90_udp@PTR;check="$$(dig +tcp +noall +answer +search 90.156.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.156.90_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 11 11:24:45.007: INFO: Unable to read wheezy_udp@dns-test-service.dns-1719.svc.cluster.local from pod dns-1719/dns-test-5c6cadde-4705-4e4d-8cae-045bc674433c: the server could not find the requested resource (get pods dns-test-5c6cadde-4705-4e4d-8cae-045bc674433c)
Jun 11 11:24:45.010: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1719.svc.cluster.local from pod dns-1719/dns-test-5c6cadde-4705-4e4d-8cae-045bc674433c: the server could not find the requested resource (get pods dns-test-5c6cadde-4705-4e4d-8cae-045bc674433c)
Jun 11 11:24:45.014: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1719.svc.cluster.local from pod dns-1719/dns-test-5c6cadde-4705-4e4d-8cae-045bc674433c: the server could not find the requested resource (get pods dns-test-5c6cadde-4705-4e4d-8cae-045bc674433c)
Jun 11 11:24:45.017: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1719.svc.cluster.local from pod dns-1719/dns-test-5c6cadde-4705-4e4d-8cae-045bc674433c: the server could not find the requested resource (get pods dns-test-5c6cadde-4705-4e4d-8cae-045bc674433c)
Jun 11 11:24:45.041: INFO: Unable to read jessie_udp@dns-test-service.dns-1719.svc.cluster.local from pod dns-1719/dns-test-5c6cadde-4705-4e4d-8cae-045bc674433c: the server could not find the requested resource (get pods dns-test-5c6cadde-4705-4e4d-8cae-045bc674433c)
Jun 11 11:24:45.044: INFO: Unable to read jessie_tcp@dns-test-service.dns-1719.svc.cluster.local from pod dns-1719/dns-test-5c6cadde-4705-4e4d-8cae-045bc674433c: the server could not find the requested resource (get pods dns-test-5c6cadde-4705-4e4d-8cae-045bc674433c)
Jun 11 11:24:45.047: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1719.svc.cluster.local from pod dns-1719/dns-test-5c6cadde-4705-4e4d-8cae-045bc674433c: the server could not find the requested resource (get pods dns-test-5c6cadde-4705-4e4d-8cae-045bc674433c)
Jun 11 11:24:45.050: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1719.svc.cluster.local from pod dns-1719/dns-test-5c6cadde-4705-4e4d-8cae-045bc674433c: the server could not find the requested resource (get pods dns-test-5c6cadde-4705-4e4d-8cae-045bc674433c)
Jun 11 11:24:45.069: INFO: Lookups using dns-1719/dns-test-5c6cadde-4705-4e4d-8cae-045bc674433c failed for: [wheezy_udp@dns-test-service.dns-1719.svc.cluster.local wheezy_tcp@dns-test-service.dns-1719.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1719.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1719.svc.cluster.local jessie_udp@dns-test-service.dns-1719.svc.cluster.local jessie_tcp@dns-test-service.dns-1719.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1719.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1719.svc.cluster.local]

(the same eight service-name lookup failures recur in each 5-second retry round, logged at 11:24:50, 11:24:55, 11:25:00, 11:25:05, and 11:25:10, until the records resolve)

Jun 11 11:25:15.134: INFO: DNS probes using dns-1719/dns-test-5c6cadde-4705-4e4d-8cae-045bc674433c succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:25:16.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1719" for this suite.

• [SLOW TEST:39.782 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":275,"completed":99,"skipped":1604,"failed":0}
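
The names probed above follow directly from the services the test creates: a ClusterIP service yields `<svc>.<ns>.svc.cluster.local` A records and, for each named port, `_<port>._tcp.<svc>.<ns>.svc.cluster.local` SRV records, plus a PTR record for the cluster IP; the headless variant (`test-service-2` in the log) resolves to pod IPs instead. A sketch of the ClusterIP service shape (selector and port values are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service        # -> dns-test-service.<ns>.svc.cluster.local A record
spec:
  selector:
    dns-test: "true"            # illustrative selector
  ports:
  - name: http                  # -> _http._tcp.dns-test-service.<ns>.svc.cluster.local SRV record
    protocol: TCP
    port: 80
```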
SSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:25:16.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jun 11 11:25:16.561: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the sample API server.
Jun 11 11:25:17.206: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jun 11 11:25:19.837: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471517, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471517, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471517, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471517, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 11 11:25:22.058: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471517, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471517, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471517, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471517, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 11 11:25:23.841: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471517, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471517, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471517, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471517, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 11 11:25:26.474: INFO: Waited 624.929638ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:25:27.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-8078" for this suite.

• [SLOW TEST:11.371 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":100,"skipped":1611,"failed":0}
SS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:25:27.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-215f4c6b-6edc-43e9-9509-774d80ca57d1
STEP: Creating secret with name s-test-opt-upd-5a2b440b-e785-414c-86a1-4d983df57943
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-215f4c6b-6edc-43e9-9509-774d80ca57d1
STEP: Updating secret s-test-opt-upd-5a2b440b-e785-414c-86a1-4d983df57943
STEP: Creating secret with name s-test-opt-create-8dc097a7-9c9f-475a-93ab-80e9ffbb39ce
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:26:44.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2244" for this suite.

• [SLOW TEST:77.151 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":101,"skipped":1613,"failed":0}
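The Secrets test above deletes one referenced secret and creates another while the pod is running. What makes that legal is the `optional` flag on the secret volume source; a minimal sketch (field layout only, not from the log) of how such a volume is declared:

```python
# Sketch of an optional secret volume in a pod spec. With optional: True
# the kubelet starts the pod even if the secret does not exist yet, and
# reflects later creations/updates/deletions into the mounted files,
# which is exactly what the test waits to observe.
volume = {
    "name": "secret-volume",
    "secret": {
        "secretName": "s-test-opt-del-215f4c6b-6edc-43e9-9509-774d80ca57d1",
        "optional": True,
    },
}
```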
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:26:44.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jun 11 11:26:49.602: INFO: Successfully updated pod "pod-update-ef230619-4600-47fb-b800-b19126fe91d0"
STEP: verifying the updated pod is in kubernetes
Jun 11 11:26:49.618: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:26:49.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3670" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":102,"skipped":1626,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:26:49.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
STEP: reading a file in the container
Jun 11 11:26:56.508: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7808 pod-service-account-69abf948-b8d2-471b-89fa-4260f919247d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jun 11 11:26:56.739: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7808 pod-service-account-69abf948-b8d2-471b-89fa-4260f919247d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jun 11 11:26:56.959: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7808 pod-service-account-69abf948-b8d2-471b-89fa-4260f919247d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:26:57.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7808" for this suite.

• [SLOW TEST:7.564 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":275,"completed":103,"skipped":1645,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:26:57.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jun 11 11:26:57.278: INFO: Waiting up to 5m0s for pod "downwardapi-volume-959c00ab-13a3-4823-96e9-fcca1aad2489" in namespace "downward-api-8322" to be "Succeeded or Failed"
Jun 11 11:26:57.280: INFO: Pod "downwardapi-volume-959c00ab-13a3-4823-96e9-fcca1aad2489": Phase="Pending", Reason="", readiness=false. Elapsed: 2.531328ms
Jun 11 11:26:59.391: INFO: Pod "downwardapi-volume-959c00ab-13a3-4823-96e9-fcca1aad2489": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113635356s
Jun 11 11:27:01.595: INFO: Pod "downwardapi-volume-959c00ab-13a3-4823-96e9-fcca1aad2489": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316798198s
Jun 11 11:27:04.303: INFO: Pod "downwardapi-volume-959c00ab-13a3-4823-96e9-fcca1aad2489": Phase="Pending", Reason="", readiness=false. Elapsed: 7.025400531s
Jun 11 11:27:06.552: INFO: Pod "downwardapi-volume-959c00ab-13a3-4823-96e9-fcca1aad2489": Phase="Pending", Reason="", readiness=false. Elapsed: 9.274507042s
Jun 11 11:27:08.612: INFO: Pod "downwardapi-volume-959c00ab-13a3-4823-96e9-fcca1aad2489": Phase="Running", Reason="", readiness=true. Elapsed: 11.333929568s
Jun 11 11:27:10.630: INFO: Pod "downwardapi-volume-959c00ab-13a3-4823-96e9-fcca1aad2489": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.352629389s
STEP: Saw pod success
Jun 11 11:27:10.631: INFO: Pod "downwardapi-volume-959c00ab-13a3-4823-96e9-fcca1aad2489" satisfied condition "Succeeded or Failed"
Jun 11 11:27:10.633: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-959c00ab-13a3-4823-96e9-fcca1aad2489 container client-container: 
STEP: delete the pod
Jun 11 11:27:10.672: INFO: Waiting for pod downwardapi-volume-959c00ab-13a3-4823-96e9-fcca1aad2489 to disappear
Jun 11 11:27:10.682: INFO: Pod downwardapi-volume-959c00ab-13a3-4823-96e9-fcca1aad2489 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:27:10.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8322" for this suite.

• [SLOW TEST:13.499 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":104,"skipped":1647,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:27:10.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-projected-b9jw
STEP: Creating a pod to test atomic-volume-subpath
Jun 11 11:27:10.780: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-b9jw" in namespace "subpath-3788" to be "Succeeded or Failed"
Jun 11 11:27:10.798: INFO: Pod "pod-subpath-test-projected-b9jw": Phase="Pending", Reason="", readiness=false. Elapsed: 17.783272ms
Jun 11 11:27:12.822: INFO: Pod "pod-subpath-test-projected-b9jw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042537694s
Jun 11 11:27:14.889: INFO: Pod "pod-subpath-test-projected-b9jw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109163216s
Jun 11 11:27:16.893: INFO: Pod "pod-subpath-test-projected-b9jw": Phase="Running", Reason="", readiness=true. Elapsed: 6.113176094s
Jun 11 11:27:19.288: INFO: Pod "pod-subpath-test-projected-b9jw": Phase="Running", Reason="", readiness=true. Elapsed: 8.508506711s
Jun 11 11:27:21.292: INFO: Pod "pod-subpath-test-projected-b9jw": Phase="Running", Reason="", readiness=true. Elapsed: 10.511904682s
Jun 11 11:27:23.481: INFO: Pod "pod-subpath-test-projected-b9jw": Phase="Running", Reason="", readiness=true. Elapsed: 12.701663865s
Jun 11 11:27:25.485: INFO: Pod "pod-subpath-test-projected-b9jw": Phase="Running", Reason="", readiness=true. Elapsed: 14.705370276s
Jun 11 11:27:27.675: INFO: Pod "pod-subpath-test-projected-b9jw": Phase="Running", Reason="", readiness=true. Elapsed: 16.895373398s
Jun 11 11:27:29.929: INFO: Pod "pod-subpath-test-projected-b9jw": Phase="Running", Reason="", readiness=true. Elapsed: 19.149338944s
Jun 11 11:27:32.050: INFO: Pod "pod-subpath-test-projected-b9jw": Phase="Running", Reason="", readiness=true. Elapsed: 21.270210513s
Jun 11 11:27:34.054: INFO: Pod "pod-subpath-test-projected-b9jw": Phase="Running", Reason="", readiness=true. Elapsed: 23.274595536s
Jun 11 11:27:36.242: INFO: Pod "pod-subpath-test-projected-b9jw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.461951081s
STEP: Saw pod success
Jun 11 11:27:36.242: INFO: Pod "pod-subpath-test-projected-b9jw" satisfied condition "Succeeded or Failed"
Jun 11 11:27:36.245: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-projected-b9jw container test-container-subpath-projected-b9jw: 
STEP: delete the pod
Jun 11 11:27:36.676: INFO: Waiting for pod pod-subpath-test-projected-b9jw to disappear
Jun 11 11:27:36.707: INFO: Pod pod-subpath-test-projected-b9jw no longer exists
STEP: Deleting pod pod-subpath-test-projected-b9jw
Jun 11 11:27:36.707: INFO: Deleting pod "pod-subpath-test-projected-b9jw" in namespace "subpath-3788"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:27:36.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3788" for this suite.

• [SLOW TEST:26.223 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":105,"skipped":1677,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:27:36.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-9259
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-9259
STEP: creating replication controller externalsvc in namespace services-9259
I0611 11:27:38.031991       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-9259, replica count: 2
I0611 11:27:41.082453       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0611 11:27:44.082693       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0611 11:27:47.082950       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Jun 11 11:27:47.123: INFO: Creating new exec pod
Jun 11 11:27:51.149: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-9259 execpodpzg5h -- /bin/sh -x -c nslookup clusterip-service'
Jun 11 11:27:51.460: INFO: stderr: "I0611 11:27:51.275480    2208 log.go:172] (0xc0000e0210) (0xc0006ad4a0) Create stream\nI0611 11:27:51.275548    2208 log.go:172] (0xc0000e0210) (0xc0006ad4a0) Stream added, broadcasting: 1\nI0611 11:27:51.277842    2208 log.go:172] (0xc0000e0210) Reply frame received for 1\nI0611 11:27:51.277877    2208 log.go:172] (0xc0000e0210) (0xc000940000) Create stream\nI0611 11:27:51.277891    2208 log.go:172] (0xc0000e0210) (0xc000940000) Stream added, broadcasting: 3\nI0611 11:27:51.278876    2208 log.go:172] (0xc0000e0210) Reply frame received for 3\nI0611 11:27:51.278920    2208 log.go:172] (0xc0000e0210) (0xc000424000) Create stream\nI0611 11:27:51.278942    2208 log.go:172] (0xc0000e0210) (0xc000424000) Stream added, broadcasting: 5\nI0611 11:27:51.279729    2208 log.go:172] (0xc0000e0210) Reply frame received for 5\nI0611 11:27:51.356064    2208 log.go:172] (0xc0000e0210) Data frame received for 5\nI0611 11:27:51.356106    2208 log.go:172] (0xc000424000) (5) Data frame handling\nI0611 11:27:51.356146    2208 log.go:172] (0xc000424000) (5) Data frame sent\n+ nslookup clusterip-service\nI0611 11:27:51.450450    2208 log.go:172] (0xc0000e0210) Data frame received for 3\nI0611 11:27:51.450482    2208 log.go:172] (0xc000940000) (3) Data frame handling\nI0611 11:27:51.450496    2208 log.go:172] (0xc000940000) (3) Data frame sent\nI0611 11:27:51.451481    2208 log.go:172] (0xc0000e0210) Data frame received for 3\nI0611 11:27:51.451502    2208 log.go:172] (0xc000940000) (3) Data frame handling\nI0611 11:27:51.451519    2208 log.go:172] (0xc000940000) (3) Data frame sent\nI0611 11:27:51.451937    2208 log.go:172] (0xc0000e0210) Data frame received for 3\nI0611 11:27:51.451962    2208 log.go:172] (0xc000940000) (3) Data frame handling\nI0611 11:27:51.452209    2208 log.go:172] (0xc0000e0210) Data frame received for 5\nI0611 11:27:51.452247    2208 log.go:172] (0xc000424000) (5) Data frame handling\nI0611 11:27:51.454401    2208 log.go:172] (0xc0000e0210) Data frame received for 1\nI0611 11:27:51.454437    2208 log.go:172] (0xc0006ad4a0) (1) Data frame handling\nI0611 11:27:51.454480    2208 log.go:172] (0xc0006ad4a0) (1) Data frame sent\nI0611 11:27:51.454514    2208 log.go:172] (0xc0000e0210) (0xc0006ad4a0) Stream removed, broadcasting: 1\nI0611 11:27:51.454540    2208 log.go:172] (0xc0000e0210) Go away received\nI0611 11:27:51.455056    2208 log.go:172] (0xc0000e0210) (0xc0006ad4a0) Stream removed, broadcasting: 1\nI0611 11:27:51.455081    2208 log.go:172] (0xc0000e0210) (0xc000940000) Stream removed, broadcasting: 3\nI0611 11:27:51.455092    2208 log.go:172] (0xc0000e0210) (0xc000424000) Stream removed, broadcasting: 5\n"
Jun 11 11:27:51.460: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-9259.svc.cluster.local\tcanonical name = externalsvc.services-9259.svc.cluster.local.\nName:\texternalsvc.services-9259.svc.cluster.local\nAddress: 10.96.234.98\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-9259, will wait for the garbage collector to delete the pods
Jun 11 11:27:51.527: INFO: Deleting ReplicationController externalsvc took: 7.844621ms
Jun 11 11:27:51.827: INFO: Terminating ReplicationController externalsvc pods took: 300.231457ms
Jun 11 11:28:03.462: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:28:03.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9259" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:26.626 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":106,"skipped":1697,"failed":0}
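The Services test above flips an existing ClusterIP service to type ExternalName, after which lookups for `clusterip-service` return a CNAME to `externalsvc` (visible in the `nslookup` stdout in the log). A minimal sketch of the spec change involved:

```python
# Sketch of the Service spec update the test performs. An ExternalName
# service is a pure DNS alias: kube-dns serves a CNAME to externalName
# and the service carries no cluster IP. Names match the log above.
patch = {
    "spec": {
        "type": "ExternalName",
        "externalName": "externalsvc.services-9259.svc.cluster.local",
        "clusterIP": "",  # must be cleared when switching to ExternalName
    }
}
```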
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:28:03.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:28:15.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8037" for this suite.

• [SLOW TEST:12.272 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":275,"completed":107,"skipped":1757,"failed":0}
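The ResourceQuota test tracks a Service through a quota's `used` counters: creation raises the count, deletion releases it. A minimal sketch of the kind of quota object involved (the name and limit here are illustrative, not taken from the log):

```python
# Sketch of an object-count ResourceQuota limiting Service objects in a
# namespace; the test asserts status.used["services"] rises on create
# and falls back after delete.
quota = {
    "apiVersion": "v1",
    "kind": "ResourceQuota",
    "metadata": {"name": "test-quota"},      # illustrative name
    "spec": {"hard": {"services": "10"}},    # illustrative limit
}
```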
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:28:15.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service nodeport-service with the type=NodePort in namespace services-3135
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-3135
STEP: creating replication controller externalsvc in namespace services-3135
I0611 11:28:16.254184       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-3135, replica count: 2
I0611 11:28:19.304750       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0611 11:28:22.305283       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Jun 11 11:28:22.358: INFO: Creating new exec pod
Jun 11 11:28:26.372: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-3135 execpod9hpnt -- /bin/sh -x -c nslookup nodeport-service'
Jun 11 11:28:26.620: INFO: stderr: "I0611 11:28:26.502709    2228 log.go:172] (0xc00003a420) (0xc000b1c000) Create stream\nI0611 11:28:26.502774    2228 log.go:172] (0xc00003a420) (0xc000b1c000) Stream added, broadcasting: 1\nI0611 11:28:26.505631    2228 log.go:172] (0xc00003a420) Reply frame received for 1\nI0611 11:28:26.505669    2228 log.go:172] (0xc00003a420) (0xc0004c6000) Create stream\nI0611 11:28:26.505681    2228 log.go:172] (0xc00003a420) (0xc0004c6000) Stream added, broadcasting: 3\nI0611 11:28:26.506561    2228 log.go:172] (0xc00003a420) Reply frame received for 3\nI0611 11:28:26.506597    2228 log.go:172] (0xc00003a420) (0xc0004ca000) Create stream\nI0611 11:28:26.506605    2228 log.go:172] (0xc00003a420) (0xc0004ca000) Stream added, broadcasting: 5\nI0611 11:28:26.507379    2228 log.go:172] (0xc00003a420) Reply frame received for 5\nI0611 11:28:26.603116    2228 log.go:172] (0xc00003a420) Data frame received for 5\nI0611 11:28:26.603174    2228 log.go:172] (0xc0004ca000) (5) Data frame handling\nI0611 11:28:26.603235    2228 log.go:172] (0xc0004ca000) (5) Data frame sent\n+ nslookup nodeport-service\nI0611 11:28:26.611867    2228 log.go:172] (0xc00003a420) Data frame received for 3\nI0611 11:28:26.611896    2228 log.go:172] (0xc0004c6000) (3) Data frame handling\nI0611 11:28:26.611919    2228 log.go:172] (0xc0004c6000) (3) Data frame sent\nI0611 11:28:26.612870    2228 log.go:172] (0xc00003a420) Data frame received for 3\nI0611 11:28:26.612888    2228 log.go:172] (0xc0004c6000) (3) Data frame handling\nI0611 11:28:26.612908    2228 log.go:172] (0xc0004c6000) (3) Data frame sent\nI0611 11:28:26.613530    2228 log.go:172] (0xc00003a420) Data frame received for 5\nI0611 11:28:26.613571    2228 log.go:172] (0xc0004ca000) (5) Data frame handling\nI0611 11:28:26.613610    2228 log.go:172] (0xc00003a420) Data frame received for 3\nI0611 11:28:26.613631    2228 log.go:172] (0xc0004c6000) (3) Data frame handling\nI0611 11:28:26.615470    2228 log.go:172] (0xc00003a420) Data frame received for 1\nI0611 11:28:26.615490    2228 log.go:172] (0xc000b1c000) (1) Data frame handling\nI0611 11:28:26.615501    2228 log.go:172] (0xc000b1c000) (1) Data frame sent\nI0611 11:28:26.615514    2228 log.go:172] (0xc00003a420) (0xc000b1c000) Stream removed, broadcasting: 1\nI0611 11:28:26.615591    2228 log.go:172] (0xc00003a420) Go away received\nI0611 11:28:26.615886    2228 log.go:172] (0xc00003a420) (0xc000b1c000) Stream removed, broadcasting: 1\nI0611 11:28:26.615910    2228 log.go:172] (0xc00003a420) (0xc0004c6000) Stream removed, broadcasting: 3\nI0611 11:28:26.615929    2228 log.go:172] (0xc00003a420) (0xc0004ca000) Stream removed, broadcasting: 5\n"
Jun 11 11:28:26.620: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-3135.svc.cluster.local\tcanonical name = externalsvc.services-3135.svc.cluster.local.\nName:\texternalsvc.services-3135.svc.cluster.local\nAddress: 10.96.254.64\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-3135, will wait for the garbage collector to delete the pods
Jun 11 11:28:26.680: INFO: Deleting ReplicationController externalsvc took: 6.503773ms
Jun 11 11:28:26.781: INFO: Terminating ReplicationController externalsvc pods took: 100.519417ms
Jun 11 11:28:33.806: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:28:33.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3135" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:18.062 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":108,"skipped":1777,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Proxy version v1
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:28:33.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-qq27x in namespace proxy-9880
I0611 11:28:33.968498       7 runners.go:190] Created replication controller with name: proxy-service-qq27x, namespace: proxy-9880, replica count: 1
I0611 11:28:35.018908       7 runners.go:190] proxy-service-qq27x Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0611 11:28:36.019159       7 runners.go:190] proxy-service-qq27x Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0611 11:28:37.019412       7 runners.go:190] proxy-service-qq27x Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0611 11:28:38.019666       7 runners.go:190] proxy-service-qq27x Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0611 11:28:39.019877       7 runners.go:190] proxy-service-qq27x Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0611 11:28:40.020142       7 runners.go:190] proxy-service-qq27x Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0611 11:28:41.020421       7 runners.go:190] proxy-service-qq27x Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0611 11:28:42.020717       7 runners.go:190] proxy-service-qq27x Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0611 11:28:43.020971       7 runners.go:190] proxy-service-qq27x Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0611 11:28:44.021427       7 runners.go:190] proxy-service-qq27x Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jun 11 11:28:44.024: INFO: setup took 10.108374622s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jun 11 11:28:44.032: INFO: (0) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:160/proxy/: foo (200; 7.095578ms)
Jun 11 11:28:44.032: INFO: (0) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 7.223868ms)
Jun 11 11:28:44.032: INFO: (0) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:1080/proxy/: test<... (200; 7.357904ms)
Jun 11 11:28:44.037: INFO: (0) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:1080/proxy/: ... (200; 11.977406ms)
Jun 11 11:28:44.042: INFO: (0) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 17.187472ms)
Jun 11 11:28:44.042: INFO: (0) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname1/proxy/: foo (200; 17.187384ms)
Jun 11 11:28:44.042: INFO: (0) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname2/proxy/: bar (200; 17.135611ms)
Jun 11 11:28:44.042: INFO: (0) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:160/proxy/: foo (200; 17.558953ms)
Jun 11 11:28:44.042: INFO: (0) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname2/proxy/: bar (200; 17.178456ms)
Jun 11 11:28:44.043: INFO: (0) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx/proxy/: test (200; 18.329963ms)
Jun 11 11:28:44.043: INFO: (0) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname1/proxy/: foo (200; 18.685235ms)
Jun 11 11:28:44.044: INFO: (0) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:462/proxy/: tls qux (200; 19.51207ms)
Jun 11 11:28:44.044: INFO: (0) /api/v1/namespaces/proxy-9880/services/https:proxy-service-qq27x:tlsportname2/proxy/: tls qux (200; 19.636367ms)
Jun 11 11:28:44.044: INFO: (0) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:460/proxy/: tls baz (200; 19.545269ms)
Jun 11 11:28:44.044: INFO: (0) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:443/proxy/: test (200; 5.961422ms)
Jun 11 11:28:44.051: INFO: (1) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:160/proxy/: foo (200; 5.984606ms)
Jun 11 11:28:44.051: INFO: (1) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:1080/proxy/: ... (200; 6.065577ms)
Jun 11 11:28:44.051: INFO: (1) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 6.801678ms)
Jun 11 11:28:44.051: INFO: (1) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname2/proxy/: bar (200; 6.847133ms)
Jun 11 11:28:44.051: INFO: (1) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname2/proxy/: bar (200; 6.924782ms)
Jun 11 11:28:44.051: INFO: (1) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:1080/proxy/: test<... (200; 6.839926ms)
Jun 11 11:28:44.051: INFO: (1) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname1/proxy/: foo (200; 6.849577ms)
Jun 11 11:28:44.051: INFO: (1) /api/v1/namespaces/proxy-9880/services/https:proxy-service-qq27x:tlsportname1/proxy/: tls baz (200; 6.851898ms)
Jun 11 11:28:44.051: INFO: (1) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 6.879929ms)
Jun 11 11:28:44.051: INFO: (1) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname1/proxy/: foo (200; 6.96401ms)
Jun 11 11:28:44.051: INFO: (1) /api/v1/namespaces/proxy-9880/services/https:proxy-service-qq27x:tlsportname2/proxy/: tls qux (200; 7.003831ms)
Jun 11 11:28:44.052: INFO: (1) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:160/proxy/: foo (200; 7.4729ms)
Jun 11 11:28:44.052: INFO: (1) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:462/proxy/: tls qux (200; 7.623802ms)
Jun 11 11:28:44.052: INFO: (1) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:443/proxy/: test<... (200; 4.208612ms)
Jun 11 11:28:44.056: INFO: (2) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:460/proxy/: tls baz (200; 4.288037ms)
Jun 11 11:28:44.057: INFO: (2) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:462/proxy/: tls qux (200; 4.388484ms)
Jun 11 11:28:44.057: INFO: (2) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 4.690771ms)
Jun 11 11:28:44.057: INFO: (2) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx/proxy/: test (200; 4.877455ms)
Jun 11 11:28:44.057: INFO: (2) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:1080/proxy/: ... (200; 4.945169ms)
Jun 11 11:28:44.057: INFO: (2) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:160/proxy/: foo (200; 4.894317ms)
Jun 11 11:28:44.058: INFO: (2) /api/v1/namespaces/proxy-9880/services/https:proxy-service-qq27x:tlsportname2/proxy/: tls qux (200; 6.198371ms)
Jun 11 11:28:44.058: INFO: (2) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname2/proxy/: bar (200; 6.246519ms)
Jun 11 11:28:44.058: INFO: (2) /api/v1/namespaces/proxy-9880/services/https:proxy-service-qq27x:tlsportname1/proxy/: tls baz (200; 6.2053ms)
Jun 11 11:28:44.058: INFO: (2) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname1/proxy/: foo (200; 6.331884ms)
Jun 11 11:28:44.058: INFO: (2) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname2/proxy/: bar (200; 6.208367ms)
Jun 11 11:28:44.058: INFO: (2) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname1/proxy/: foo (200; 6.292319ms)
Jun 11 11:28:44.061: INFO: (3) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:443/proxy/: test<... (200; 2.7558ms)
Jun 11 11:28:44.062: INFO: (3) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:460/proxy/: tls baz (200; 2.92223ms)
Jun 11 11:28:44.063: INFO: (3) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 4.521959ms)
Jun 11 11:28:44.063: INFO: (3) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:160/proxy/: foo (200; 4.650899ms)
Jun 11 11:28:44.063: INFO: (3) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:1080/proxy/: ... (200; 4.688823ms)
Jun 11 11:28:44.063: INFO: (3) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 4.638481ms)
Jun 11 11:28:44.063: INFO: (3) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:160/proxy/: foo (200; 4.751721ms)
Jun 11 11:28:44.063: INFO: (3) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:462/proxy/: tls qux (200; 4.808681ms)
Jun 11 11:28:44.064: INFO: (3) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx/proxy/: test (200; 4.966922ms)
Jun 11 11:28:44.064: INFO: (3) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname2/proxy/: bar (200; 5.511742ms)
Jun 11 11:28:44.066: INFO: (3) /api/v1/namespaces/proxy-9880/services/https:proxy-service-qq27x:tlsportname1/proxy/: tls baz (200; 6.874725ms)
Jun 11 11:28:44.066: INFO: (3) /api/v1/namespaces/proxy-9880/services/https:proxy-service-qq27x:tlsportname2/proxy/: tls qux (200; 6.907282ms)
Jun 11 11:28:44.066: INFO: (3) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname1/proxy/: foo (200; 7.00284ms)
Jun 11 11:28:44.066: INFO: (3) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname2/proxy/: bar (200; 7.012446ms)
Jun 11 11:28:44.066: INFO: (3) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname1/proxy/: foo (200; 7.005918ms)
Jun 11 11:28:44.069: INFO: (4) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 3.727303ms)
Jun 11 11:28:44.070: INFO: (4) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:160/proxy/: foo (200; 3.859441ms)
Jun 11 11:28:44.070: INFO: (4) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:443/proxy/: test (200; 4.436473ms)
Jun 11 11:28:44.071: INFO: (4) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:1080/proxy/: ... (200; 4.846365ms)
Jun 11 11:28:44.078: INFO: (4) /api/v1/namespaces/proxy-9880/services/https:proxy-service-qq27x:tlsportname1/proxy/: tls baz (200; 12.389245ms)
Jun 11 11:28:44.078: INFO: (4) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname2/proxy/: bar (200; 12.461125ms)
Jun 11 11:28:44.078: INFO: (4) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:1080/proxy/: test<... (200; 12.538193ms)
Jun 11 11:28:44.078: INFO: (4) /api/v1/namespaces/proxy-9880/services/https:proxy-service-qq27x:tlsportname2/proxy/: tls qux (200; 12.595984ms)
Jun 11 11:28:44.078: INFO: (4) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname1/proxy/: foo (200; 12.642059ms)
Jun 11 11:28:44.079: INFO: (4) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:460/proxy/: tls baz (200; 12.890357ms)
Jun 11 11:28:44.079: INFO: (4) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname1/proxy/: foo (200; 12.885848ms)
Jun 11 11:28:44.079: INFO: (4) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname2/proxy/: bar (200; 12.977919ms)
Jun 11 11:28:44.079: INFO: (4) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 12.918301ms)
Jun 11 11:28:44.079: INFO: (4) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:462/proxy/: tls qux (200; 13.135631ms)
Jun 11 11:28:44.082: INFO: (5) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 3.308354ms)
Jun 11 11:28:44.083: INFO: (5) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:160/proxy/: foo (200; 3.639338ms)
Jun 11 11:28:44.083: INFO: (5) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:443/proxy/: test (200; 4.764219ms)
Jun 11 11:28:44.084: INFO: (5) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:1080/proxy/: ... (200; 4.863349ms)
Jun 11 11:28:44.084: INFO: (5) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname1/proxy/: foo (200; 5.077473ms)
Jun 11 11:28:44.084: INFO: (5) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname1/proxy/: foo (200; 5.115973ms)
Jun 11 11:28:44.084: INFO: (5) /api/v1/namespaces/proxy-9880/services/https:proxy-service-qq27x:tlsportname2/proxy/: tls qux (200; 5.179119ms)
Jun 11 11:28:44.084: INFO: (5) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname2/proxy/: bar (200; 5.310386ms)
Jun 11 11:28:44.084: INFO: (5) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname2/proxy/: bar (200; 5.203737ms)
Jun 11 11:28:44.084: INFO: (5) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 5.315618ms)
Jun 11 11:28:44.084: INFO: (5) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:1080/proxy/: test<... (200; 5.30967ms)
Jun 11 11:28:44.085: INFO: (5) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:160/proxy/: foo (200; 5.486931ms)
Jun 11 11:28:44.085: INFO: (5) /api/v1/namespaces/proxy-9880/services/https:proxy-service-qq27x:tlsportname1/proxy/: tls baz (200; 5.724794ms)
Jun 11 11:28:44.085: INFO: (5) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:460/proxy/: tls baz (200; 6.12585ms)
Jun 11 11:28:44.090: INFO: (6) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname2/proxy/: bar (200; 5.256628ms)
Jun 11 11:28:44.091: INFO: (6) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx/proxy/: test (200; 5.342545ms)
Jun 11 11:28:44.091: INFO: (6) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname1/proxy/: foo (200; 5.396323ms)
Jun 11 11:28:44.091: INFO: (6) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:1080/proxy/: ... (200; 5.388611ms)
Jun 11 11:28:44.091: INFO: (6) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 5.543664ms)
Jun 11 11:28:44.091: INFO: (6) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:462/proxy/: tls qux (200; 5.549899ms)
Jun 11 11:28:44.091: INFO: (6) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:160/proxy/: foo (200; 5.555539ms)
Jun 11 11:28:44.091: INFO: (6) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname2/proxy/: bar (200; 5.575296ms)
Jun 11 11:28:44.091: INFO: (6) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:443/proxy/: test<... (200; 5.868764ms)
Jun 11 11:28:44.091: INFO: (6) /api/v1/namespaces/proxy-9880/services/https:proxy-service-qq27x:tlsportname1/proxy/: tls baz (200; 5.869593ms)
Jun 11 11:28:44.091: INFO: (6) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname1/proxy/: foo (200; 5.943765ms)
Jun 11 11:28:44.091: INFO: (6) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:160/proxy/: foo (200; 6.279422ms)
Jun 11 11:28:44.092: INFO: (6) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 6.451817ms)
Jun 11 11:28:44.092: INFO: (6) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:460/proxy/: tls baz (200; 6.480668ms)
Jun 11 11:28:44.096: INFO: (7) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 4.327906ms)
Jun 11 11:28:44.096: INFO: (7) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:1080/proxy/: test<... (200; 4.421434ms)
Jun 11 11:28:44.096: INFO: (7) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:1080/proxy/: ... (200; 4.50825ms)
Jun 11 11:28:44.096: INFO: (7) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:160/proxy/: foo (200; 4.472897ms)
Jun 11 11:28:44.096: INFO: (7) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:160/proxy/: foo (200; 4.535564ms)
Jun 11 11:28:44.096: INFO: (7) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:460/proxy/: tls baz (200; 4.508319ms)
Jun 11 11:28:44.096: INFO: (7) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx/proxy/: test (200; 4.539972ms)
Jun 11 11:28:44.096: INFO: (7) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 4.570254ms)
Jun 11 11:28:44.096: INFO: (7) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:443/proxy/: ... (200; 4.346322ms)
Jun 11 11:28:44.103: INFO: (8) /api/v1/namespaces/proxy-9880/services/https:proxy-service-qq27x:tlsportname2/proxy/: tls qux (200; 4.624755ms)
Jun 11 11:28:44.104: INFO: (8) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:462/proxy/: tls qux (200; 4.981281ms)
Jun 11 11:28:44.104: INFO: (8) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:1080/proxy/: test<... (200; 4.990634ms)
Jun 11 11:28:44.104: INFO: (8) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:443/proxy/: test (200; 5.078238ms)
Jun 11 11:28:44.104: INFO: (8) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname1/proxy/: foo (200; 5.080256ms)
Jun 11 11:28:44.104: INFO: (8) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname1/proxy/: foo (200; 5.076371ms)
Jun 11 11:28:44.104: INFO: (8) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname2/proxy/: bar (200; 5.186085ms)
Jun 11 11:28:44.104: INFO: (8) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname2/proxy/: bar (200; 5.350636ms)
Jun 11 11:28:44.104: INFO: (8) /api/v1/namespaces/proxy-9880/services/https:proxy-service-qq27x:tlsportname1/proxy/: tls baz (200; 5.355865ms)
Jun 11 11:28:44.104: INFO: (8) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:460/proxy/: tls baz (200; 5.520177ms)
Jun 11 11:28:44.109: INFO: (9) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx/proxy/: test (200; 4.800693ms)
Jun 11 11:28:44.109: INFO: (9) /api/v1/namespaces/proxy-9880/services/https:proxy-service-qq27x:tlsportname2/proxy/: tls qux (200; 4.882642ms)
Jun 11 11:28:44.109: INFO: (9) /api/v1/namespaces/proxy-9880/services/https:proxy-service-qq27x:tlsportname1/proxy/: tls baz (200; 5.115706ms)
Jun 11 11:28:44.109: INFO: (9) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname1/proxy/: foo (200; 5.226563ms)
Jun 11 11:28:44.109: INFO: (9) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname2/proxy/: bar (200; 5.148556ms)
Jun 11 11:28:44.109: INFO: (9) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname2/proxy/: bar (200; 5.241008ms)
Jun 11 11:28:44.109: INFO: (9) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:462/proxy/: tls qux (200; 5.298117ms)
Jun 11 11:28:44.109: INFO: (9) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:1080/proxy/: test<... (200; 5.306763ms)
Jun 11 11:28:44.109: INFO: (9) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:460/proxy/: tls baz (200; 5.226827ms)
Jun 11 11:28:44.109: INFO: (9) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname1/proxy/: foo (200; 5.327511ms)
Jun 11 11:28:44.110: INFO: (9) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:1080/proxy/: ... (200; 5.885783ms)
Jun 11 11:28:44.111: INFO: (9) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 6.360957ms)
Jun 11 11:28:44.111: INFO: (9) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 6.659214ms)
Jun 11 11:28:44.111: INFO: (9) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:160/proxy/: foo (200; 6.557093ms)
Jun 11 11:28:44.111: INFO: (9) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:443/proxy/: ... (200; 3.732749ms)
Jun 11 11:28:44.115: INFO: (10) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname1/proxy/: foo (200; 4.218168ms)
Jun 11 11:28:44.116: INFO: (10) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname2/proxy/: bar (200; 4.299309ms)
Jun 11 11:28:44.116: INFO: (10) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname2/proxy/: bar (200; 4.313897ms)
Jun 11 11:28:44.116: INFO: (10) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:460/proxy/: tls baz (200; 4.898638ms)
Jun 11 11:28:44.116: INFO: (10) /api/v1/namespaces/proxy-9880/services/https:proxy-service-qq27x:tlsportname2/proxy/: tls qux (200; 4.857011ms)
Jun 11 11:28:44.116: INFO: (10) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:462/proxy/: tls qux (200; 4.877145ms)
Jun 11 11:28:44.116: INFO: (10) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx/proxy/: test (200; 4.871418ms)
Jun 11 11:28:44.116: INFO: (10) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:1080/proxy/: test<... (200; 4.924548ms)
Jun 11 11:28:44.116: INFO: (10) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:160/proxy/: foo (200; 4.864246ms)
Jun 11 11:28:44.116: INFO: (10) /api/v1/namespaces/proxy-9880/services/https:proxy-service-qq27x:tlsportname1/proxy/: tls baz (200; 4.864349ms)
Jun 11 11:28:44.116: INFO: (10) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:443/proxy/: ... (200; 4.820321ms)
Jun 11 11:28:44.121: INFO: (11) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname2/proxy/: bar (200; 4.837447ms)
Jun 11 11:28:44.122: INFO: (11) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:460/proxy/: tls baz (200; 5.176861ms)
Jun 11 11:28:44.122: INFO: (11) /api/v1/namespaces/proxy-9880/services/https:proxy-service-qq27x:tlsportname1/proxy/: tls baz (200; 5.493471ms)
Jun 11 11:28:44.122: INFO: (11) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 5.519945ms)
Jun 11 11:28:44.122: INFO: (11) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx/proxy/: test (200; 5.485957ms)
Jun 11 11:28:44.122: INFO: (11) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:462/proxy/: tls qux (200; 5.486486ms)
Jun 11 11:28:44.122: INFO: (11) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:443/proxy/: test<... (200; 5.742036ms)
Jun 11 11:28:44.128: INFO: (12) /api/v1/namespaces/proxy-9880/services/https:proxy-service-qq27x:tlsportname2/proxy/: tls qux (200; 6.056409ms)
Jun 11 11:28:44.128: INFO: (12) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 6.328578ms)
Jun 11 11:28:44.128: INFO: (12) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 6.263229ms)
Jun 11 11:28:44.128: INFO: (12) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:1080/proxy/: ... (200; 6.311206ms)
Jun 11 11:28:44.129: INFO: (12) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:160/proxy/: foo (200; 6.27155ms)
Jun 11 11:28:44.129: INFO: (12) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:443/proxy/: test<... (200; 6.277173ms)
Jun 11 11:28:44.129: INFO: (12) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname1/proxy/: foo (200; 6.614175ms)
Jun 11 11:28:44.129: INFO: (12) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx/proxy/: test (200; 6.589292ms)
Jun 11 11:28:44.129: INFO: (12) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:460/proxy/: tls baz (200; 6.788153ms)
Jun 11 11:28:44.129: INFO: (12) /api/v1/namespaces/proxy-9880/services/https:proxy-service-qq27x:tlsportname1/proxy/: tls baz (200; 6.8031ms)
Jun 11 11:28:44.129: INFO: (12) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:160/proxy/: foo (200; 6.550904ms)
Jun 11 11:28:44.129: INFO: (12) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname2/proxy/: bar (200; 6.913694ms)
Jun 11 11:28:44.129: INFO: (12) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname1/proxy/: foo (200; 6.963062ms)
Jun 11 11:28:44.129: INFO: (12) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname2/proxy/: bar (200; 6.940446ms)
Jun 11 11:28:44.133: INFO: (13) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:1080/proxy/: ... (200; 3.77565ms)
Jun 11 11:28:44.133: INFO: (13) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:1080/proxy/: test<... (200; 4.076661ms)
Jun 11 11:28:44.134: INFO: (13) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:160/proxy/: foo (200; 4.290609ms)
Jun 11 11:28:44.134: INFO: (13) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 4.357739ms)
Jun 11 11:28:44.134: INFO: (13) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:443/proxy/: test (200; 5.182836ms)
Jun 11 11:28:44.134: INFO: (13) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 5.144042ms)
Jun 11 11:28:44.134: INFO: (13) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname1/proxy/: foo (200; 5.221031ms)
Jun 11 11:28:44.134: INFO: (13) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:460/proxy/: tls baz (200; 5.361829ms)
Jun 11 11:28:44.135: INFO: (13) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname2/proxy/: bar (200; 5.737898ms)
Jun 11 11:28:44.135: INFO: (13) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname2/proxy/: bar (200; 5.740219ms)
Jun 11 11:28:44.135: INFO: (13) /api/v1/namespaces/proxy-9880/services/https:proxy-service-qq27x:tlsportname2/proxy/: tls qux (200; 5.844916ms)
Jun 11 11:28:44.135: INFO: (13) /api/v1/namespaces/proxy-9880/services/https:proxy-service-qq27x:tlsportname1/proxy/: tls baz (200; 5.907335ms)
Jun 11 11:28:44.138: INFO: (14) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:160/proxy/: foo (200; 2.448242ms)
Jun 11 11:28:44.139: INFO: (14) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:443/proxy/: ... (200; 5.15668ms)
Jun 11 11:28:44.140: INFO: (14) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname2/proxy/: bar (200; 5.247796ms)
Jun 11 11:28:44.140: INFO: (14) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname1/proxy/: foo (200; 5.263908ms)
Jun 11 11:28:44.140: INFO: (14) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 5.214334ms)
Jun 11 11:28:44.141: INFO: (14) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx/proxy/: test (200; 5.653777ms)
Jun 11 11:28:44.141: INFO: (14) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:1080/proxy/: test<... (200; 5.633079ms)
Jun 11 11:28:44.141: INFO: (14) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname2/proxy/: bar (200; 5.658926ms)
Jun 11 11:28:44.141: INFO: (14) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 5.747307ms)
Jun 11 11:28:44.141: INFO: (14) /api/v1/namespaces/proxy-9880/services/https:proxy-service-qq27x:tlsportname2/proxy/: tls qux (200; 5.774615ms)
Jun 11 11:28:44.141: INFO: (14) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:462/proxy/: tls qux (200; 6.110249ms)
Jun 11 11:28:44.141: INFO: (14) /api/v1/namespaces/proxy-9880/services/https:proxy-service-qq27x:tlsportname1/proxy/: tls baz (200; 6.203001ms)
Jun 11 11:28:44.148: INFO: (15) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:160/proxy/: foo (200; 6.835243ms)
Jun 11 11:28:44.148: INFO: (15) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx/proxy/: test (200; 6.964064ms)
Jun 11 11:28:44.148: INFO: (15) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:1080/proxy/: test<... (200; 6.938975ms)
Jun 11 11:28:44.148: INFO: (15) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:1080/proxy/: ... (200; 6.950984ms)
Jun 11 11:28:44.149: INFO: (15) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:160/proxy/: foo (200; 7.136788ms)
Jun 11 11:28:44.149: INFO: (15) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:460/proxy/: tls baz (200; 7.330056ms)
Jun 11 11:28:44.149: INFO: (15) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:462/proxy/: tls qux (200; 7.558197ms)
Jun 11 11:28:44.149: INFO: (15) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:443/proxy/: test<... (200; 3.404849ms)
Jun 11 11:28:44.155: INFO: (16) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:160/proxy/: foo (200; 3.504793ms)
Jun 11 11:28:44.155: INFO: (16) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 3.614759ms)
Jun 11 11:28:44.155: INFO: (16) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:462/proxy/: tls qux (200; 4.010064ms)
Jun 11 11:28:44.156: INFO: (16) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:460/proxy/: tls baz (200; 3.994686ms)
Jun 11 11:28:44.156: INFO: (16) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx/proxy/: test (200; 4.072023ms)
Jun 11 11:28:44.156: INFO: (16) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:443/proxy/: ... (200; 4.084399ms)
Jun 11 11:28:44.156: INFO: (16) /api/v1/namespaces/proxy-9880/services/https:proxy-service-qq27x:tlsportname2/proxy/: tls qux (200; 4.866668ms)
Jun 11 11:28:44.156: INFO: (16) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname1/proxy/: foo (200; 4.881672ms)
Jun 11 11:28:44.156: INFO: (16) /api/v1/namespaces/proxy-9880/services/https:proxy-service-qq27x:tlsportname1/proxy/: tls baz (200; 5.002086ms)
Jun 11 11:28:44.156: INFO: (16) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname2/proxy/: bar (200; 4.952893ms)
Jun 11 11:28:44.156: INFO: (16) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname2/proxy/: bar (200; 4.943842ms)
Jun 11 11:28:44.157: INFO: (16) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname1/proxy/: foo (200; 5.465521ms)
Jun 11 11:28:44.160: INFO: (17) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 2.578763ms)
Jun 11 11:28:44.160: INFO: (17) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx/proxy/: test (200; 3.241045ms)
Jun 11 11:28:44.160: INFO: (17) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:1080/proxy/: ... (200; 3.075833ms)
Jun 11 11:28:44.161: INFO: (17) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:160/proxy/: foo (200; 3.311065ms)
Jun 11 11:28:44.163: INFO: (17) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:160/proxy/: foo (200; 5.382113ms)
Jun 11 11:28:44.163: INFO: (17) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname2/proxy/: bar (200; 5.475176ms)
Jun 11 11:28:44.163: INFO: (17) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:462/proxy/: tls qux (200; 5.398895ms)
Jun 11 11:28:44.163: INFO: (17) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 5.506837ms)
Jun 11 11:28:44.163: INFO: (17) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:1080/proxy/: test<... (200; 5.764224ms)
Jun 11 11:28:44.163: INFO: (17) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname1/proxy/: foo (200; 5.860153ms)
Jun 11 11:28:44.163: INFO: (17) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname2/proxy/: bar (200; 5.800816ms)
Jun 11 11:28:44.163: INFO: (17) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname1/proxy/: foo (200; 5.836674ms)
Jun 11 11:28:44.163: INFO: (17) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:443/proxy/: ...
Jun 11 11:28:44.167: INFO: (18) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:1080/proxy/: ... (200; 3.958794ms)
Jun 11 11:28:44.167: INFO: (18) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx/proxy/: test (200; 4.09628ms)
Jun 11 11:28:44.167: INFO: (18) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:443/proxy/: ...
Jun 11 11:28:44.167: INFO: (18) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:1080/proxy/: test<... (200; 4.948575ms)
Jun 11 11:28:44.172: INFO: (19) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:1080/proxy/: ... (200; 3.57248ms)
Jun 11 11:28:44.172: INFO: (19) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:460/proxy/: tls baz (200; 3.613111ms)
Jun 11 11:28:44.172: INFO: (19) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 3.602269ms)
Jun 11 11:28:44.172: INFO: (19) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:1080/proxy/: test<... (200; 3.657672ms)
Jun 11 11:28:44.172: INFO: (19) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:160/proxy/: foo (200; 3.916941ms)
Jun 11 11:28:44.172: INFO: (19) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx:162/proxy/: bar (200; 4.089324ms)
Jun 11 11:28:44.172: INFO: (19) /api/v1/namespaces/proxy-9880/pods/http:proxy-service-qq27x-bbxcx:160/proxy/: foo (200; 4.052911ms)
Jun 11 11:28:44.172: INFO: (19) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:462/proxy/: tls qux (200; 4.097978ms)
Jun 11 11:28:44.172: INFO: (19) /api/v1/namespaces/proxy-9880/pods/proxy-service-qq27x-bbxcx/proxy/: test (200; 4.106605ms)
Jun 11 11:28:44.172: INFO: (19) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname1/proxy/: foo (200; 4.161906ms)
Jun 11 11:28:44.173: INFO: (19) /api/v1/namespaces/proxy-9880/services/https:proxy-service-qq27x:tlsportname2/proxy/: tls qux (200; 4.574257ms)
Jun 11 11:28:44.173: INFO: (19) /api/v1/namespaces/proxy-9880/services/http:proxy-service-qq27x:portname2/proxy/: bar (200; 4.640019ms)
Jun 11 11:28:44.173: INFO: (19) /api/v1/namespaces/proxy-9880/services/proxy-service-qq27x:portname2/proxy/: bar (200; 4.631944ms)
Jun 11 11:28:44.173: INFO: (19) /api/v1/namespaces/proxy-9880/pods/https:proxy-service-qq27x-bbxcx:443/proxy/: ...
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-481
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-481
I0611 11:28:53.932723       7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-481, replica count: 2
I0611 11:28:56.983207       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0611 11:28:59.983466       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jun 11 11:28:59.983: INFO: Creating new exec pod
Jun 11 11:29:05.043: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-481 execpoddgmjt -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jun 11 11:29:05.293: INFO: stderr: "I0611 11:29:05.187876    2249 log.go:172] (0xc0007a4580) (0xc0007461e0) Create stream\nI0611 11:29:05.187931    2249 log.go:172] (0xc0007a4580) (0xc0007461e0) Stream added, broadcasting: 1\nI0611 11:29:05.190447    2249 log.go:172] (0xc0007a4580) Reply frame received for 1\nI0611 11:29:05.190485    2249 log.go:172] (0xc0007a4580) (0xc000645360) Create stream\nI0611 11:29:05.190493    2249 log.go:172] (0xc0007a4580) (0xc000645360) Stream added, broadcasting: 3\nI0611 11:29:05.191616    2249 log.go:172] (0xc0007a4580) Reply frame received for 3\nI0611 11:29:05.191650    2249 log.go:172] (0xc0007a4580) (0xc000aae000) Create stream\nI0611 11:29:05.191665    2249 log.go:172] (0xc0007a4580) (0xc000aae000) Stream added, broadcasting: 5\nI0611 11:29:05.192616    2249 log.go:172] (0xc0007a4580) Reply frame received for 5\nI0611 11:29:05.249482    2249 log.go:172] (0xc0007a4580) Data frame received for 5\nI0611 11:29:05.249510    2249 log.go:172] (0xc000aae000) (5) Data frame handling\nI0611 11:29:05.249530    2249 log.go:172] (0xc000aae000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0611 11:29:05.285698    2249 log.go:172] (0xc0007a4580) Data frame received for 5\nI0611 11:29:05.285721    2249 log.go:172] (0xc000aae000) (5) Data frame handling\nI0611 11:29:05.285734    2249 log.go:172] (0xc000aae000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0611 11:29:05.286174    2249 log.go:172] (0xc0007a4580) Data frame received for 5\nI0611 11:29:05.286200    2249 log.go:172] (0xc000aae000) (5) Data frame handling\nI0611 11:29:05.286222    2249 log.go:172] (0xc0007a4580) Data frame received for 3\nI0611 11:29:05.286244    2249 log.go:172] (0xc000645360) (3) Data frame handling\nI0611 11:29:05.287805    2249 log.go:172] (0xc0007a4580) Data frame received for 1\nI0611 11:29:05.287836    2249 log.go:172] (0xc0007461e0) (1) Data frame handling\nI0611 11:29:05.287855    2249 log.go:172] 
(0xc0007461e0) (1) Data frame sent\nI0611 11:29:05.287878    2249 log.go:172] (0xc0007a4580) (0xc0007461e0) Stream removed, broadcasting: 1\nI0611 11:29:05.287895    2249 log.go:172] (0xc0007a4580) Go away received\nI0611 11:29:05.288155    2249 log.go:172] (0xc0007a4580) (0xc0007461e0) Stream removed, broadcasting: 1\nI0611 11:29:05.288167    2249 log.go:172] (0xc0007a4580) (0xc000645360) Stream removed, broadcasting: 3\nI0611 11:29:05.288172    2249 log.go:172] (0xc0007a4580) (0xc000aae000) Stream removed, broadcasting: 5\n"
Jun 11 11:29:05.293: INFO: stdout: ""
Jun 11 11:29:05.294: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-481 execpoddgmjt -- /bin/sh -x -c nc -zv -t -w 2 10.101.139.16 80'
Jun 11 11:29:05.497: INFO: stderr: "I0611 11:29:05.426018    2270 log.go:172] (0xc000a21a20) (0xc000960640) Create stream\nI0611 11:29:05.426085    2270 log.go:172] (0xc000a21a20) (0xc000960640) Stream added, broadcasting: 1\nI0611 11:29:05.430650    2270 log.go:172] (0xc000a21a20) Reply frame received for 1\nI0611 11:29:05.430697    2270 log.go:172] (0xc000a21a20) (0xc0005d95e0) Create stream\nI0611 11:29:05.430712    2270 log.go:172] (0xc000a21a20) (0xc0005d95e0) Stream added, broadcasting: 3\nI0611 11:29:05.431701    2270 log.go:172] (0xc000a21a20) Reply frame received for 3\nI0611 11:29:05.431771    2270 log.go:172] (0xc000a21a20) (0xc00042aa00) Create stream\nI0611 11:29:05.431789    2270 log.go:172] (0xc000a21a20) (0xc00042aa00) Stream added, broadcasting: 5\nI0611 11:29:05.432701    2270 log.go:172] (0xc000a21a20) Reply frame received for 5\nI0611 11:29:05.488922    2270 log.go:172] (0xc000a21a20) Data frame received for 5\nI0611 11:29:05.488982    2270 log.go:172] (0xc00042aa00) (5) Data frame handling\nI0611 11:29:05.489003    2270 log.go:172] (0xc00042aa00) (5) Data frame sent\nI0611 11:29:05.489015    2270 log.go:172] (0xc000a21a20) Data frame received for 5\nI0611 11:29:05.489027    2270 log.go:172] (0xc00042aa00) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.139.16 80\nConnection to 10.101.139.16 80 port [tcp/http] succeeded!\nI0611 11:29:05.489097    2270 log.go:172] (0xc000a21a20) Data frame received for 3\nI0611 11:29:05.489347    2270 log.go:172] (0xc0005d95e0) (3) Data frame handling\nI0611 11:29:05.490713    2270 log.go:172] (0xc000a21a20) Data frame received for 1\nI0611 11:29:05.490725    2270 log.go:172] (0xc000960640) (1) Data frame handling\nI0611 11:29:05.490739    2270 log.go:172] (0xc000960640) (1) Data frame sent\nI0611 11:29:05.490749    2270 log.go:172] (0xc000a21a20) (0xc000960640) Stream removed, broadcasting: 1\nI0611 11:29:05.490757    2270 log.go:172] (0xc000a21a20) Go away received\nI0611 11:29:05.491072    2270 log.go:172] 
(0xc000a21a20) (0xc000960640) Stream removed, broadcasting: 1\nI0611 11:29:05.491088    2270 log.go:172] (0xc000a21a20) (0xc0005d95e0) Stream removed, broadcasting: 3\nI0611 11:29:05.491094    2270 log.go:172] (0xc000a21a20) (0xc00042aa00) Stream removed, broadcasting: 5\n"
Jun 11 11:29:05.497: INFO: stdout: ""
Jun 11 11:29:05.497: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:29:05.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-481" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:11.796 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":110,"skipped":1792,"failed":0}
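For reference, the type transition exercised above corresponds roughly to the following two manifests. This is a sketch only: the Service name and namespace come from the log, but the `externalName` target, selector, and port values are assumed (the e2e framework generates its own).

```yaml
# Sketch: the initial ExternalName Service (target hostname assumed, not in the log).
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
  namespace: services-481
spec:
  type: ExternalName
  externalName: foo.example.com
---
# Sketch: after the change, a ClusterIP Service fronting the replication
# controller's pods (selector and ports assumed).
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
  namespace: services-481
spec:
  type: ClusterIP
  selector:
    name: externalname-service
  ports:
  - port: 80
    targetPort: 80
```

The log then checks reachability from an exec pod with `nc -zv -t -w 2 externalname-service 80`, first by DNS name and then by the allocated ClusterIP (10.101.139.16).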
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:29:05.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 11 11:29:06.490: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 11 11:29:08.499: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471746, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471746, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471746, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471746, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 11 11:29:10.552: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471746, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471746, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471746, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727471746, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 11 11:29:13.530: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 11:29:13.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:29:14.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5551" for this suite.
STEP: Destroying namespace "webhook-5551-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.330 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":111,"skipped":1850,"failed":0}
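The test above registers a validating admission webhook that rejects create, update, and delete operations on a custom resource. A hedged sketch of such a registration follows; the service name and namespace appear in the log, but the configuration name, CRD group/resource, path, and CA bundle are hypothetical placeholders.

```yaml
# Sketch: a ValidatingWebhookConfiguration that denies CR operations.
# All names except the service/namespace are assumed, not taken from the log.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-custom-resource-webhook      # hypothetical
webhooks:
- name: deny-custom-resource.example.com  # hypothetical
  rules:
  - apiGroups: ["mygroup.example.com"]    # hypothetical CRD group
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE", "DELETE"]
    resources: ["e2e-test-crds"]          # hypothetical resource plural
  clientConfig:
    service:
      namespace: webhook-5551
      name: e2e-test-webhook
      path: /custom-resource              # hypothetical path
    caBundle: <base64-encoded-ca>         # generated per run by the framework
  admissionReviewVersions: ["v1"]
  sideEffects: None
```

Once registered, the API server consults the webhook before persisting changes, which is why the disallowed create/update/delete attempts in the steps above are denied until the offending key is removed.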
SSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:29:14.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's args
Jun 11 11:29:14.989: INFO: Waiting up to 5m0s for pod "var-expansion-3c89a0b3-3e18-4131-8ac8-2657d56a9536" in namespace "var-expansion-262" to be "Succeeded or Failed"
Jun 11 11:29:14.993: INFO: Pod "var-expansion-3c89a0b3-3e18-4131-8ac8-2657d56a9536": Phase="Pending", Reason="", readiness=false. Elapsed: 3.796677ms
Jun 11 11:29:17.056: INFO: Pod "var-expansion-3c89a0b3-3e18-4131-8ac8-2657d56a9536": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066762738s
Jun 11 11:29:19.060: INFO: Pod "var-expansion-3c89a0b3-3e18-4131-8ac8-2657d56a9536": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071066498s
STEP: Saw pod success
Jun 11 11:29:19.060: INFO: Pod "var-expansion-3c89a0b3-3e18-4131-8ac8-2657d56a9536" satisfied condition "Succeeded or Failed"
Jun 11 11:29:19.063: INFO: Trying to get logs from node kali-worker2 pod var-expansion-3c89a0b3-3e18-4131-8ac8-2657d56a9536 container dapi-container: 
STEP: delete the pod
Jun 11 11:29:19.208: INFO: Waiting for pod var-expansion-3c89a0b3-3e18-4131-8ac8-2657d56a9536 to disappear
Jun 11 11:29:19.227: INFO: Pod var-expansion-3c89a0b3-3e18-4131-8ac8-2657d56a9536 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:29:19.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-262" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":112,"skipped":1853,"failed":0}
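The variable-expansion test above creates a pod whose `args` reference an environment variable with the `$(VAR)` syntax, which the kubelet substitutes before starting the container. A minimal sketch (pod name, image, and values assumed; the container name `dapi-container` is from the log):

```yaml
# Sketch: $(TEST_VAR) in args is expanded from the container's env by Kubernetes.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example   # hypothetical; the suite uses generated UUID names
  namespace: var-expansion-262
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29         # assumed test image
    env:
    - name: TEST_VAR
      value: "test-value"       # assumed value
    command: ["/bin/echo"]
    args: ["TEST_VAR is $(TEST_VAR)"]
```

The pod runs to completion and the framework asserts on its logs, matching the "Succeeded or Failed" wait seen above.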
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:29:19.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Jun 11 11:29:19.305: INFO: Waiting up to 5m0s for pod "pod-23d75a12-0fc8-4d6d-a363-e46fdea42944" in namespace "emptydir-1857" to be "Succeeded or Failed"
Jun 11 11:29:19.350: INFO: Pod "pod-23d75a12-0fc8-4d6d-a363-e46fdea42944": Phase="Pending", Reason="", readiness=false. Elapsed: 45.323286ms
Jun 11 11:29:21.354: INFO: Pod "pod-23d75a12-0fc8-4d6d-a363-e46fdea42944": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049640561s
Jun 11 11:29:23.359: INFO: Pod "pod-23d75a12-0fc8-4d6d-a363-e46fdea42944": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054267494s
STEP: Saw pod success
Jun 11 11:29:23.359: INFO: Pod "pod-23d75a12-0fc8-4d6d-a363-e46fdea42944" satisfied condition "Succeeded or Failed"
Jun 11 11:29:23.362: INFO: Trying to get logs from node kali-worker pod pod-23d75a12-0fc8-4d6d-a363-e46fdea42944 container test-container: 
STEP: delete the pod
Jun 11 11:29:23.402: INFO: Waiting for pod pod-23d75a12-0fc8-4d6d-a363-e46fdea42944 to disappear
Jun 11 11:29:23.406: INFO: Pod pod-23d75a12-0fc8-4d6d-a363-e46fdea42944 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:29:23.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1857" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":113,"skipped":1863,"failed":0}
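The emptyDir test above runs as a non-root user, writes a file with mode 0644 into an `emptyDir` volume on the default medium (node disk), and verifies the resulting permissions. A sketch with assumed details (the container name `test-container` is from the log; image, UID, and command are placeholders for the suite's own mounttest image):

```yaml
# Sketch: non-root pod writing a 0644 file into a default-medium emptyDir.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-example   # hypothetical name
spec:
  securityContext:
    runAsUser: 1001             # non-root UID, value assumed
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29         # assumed; the e2e suite uses its own test image
    command: ["/bin/sh", "-c",
      "touch /test-volume/f && chmod 0644 /test-volume/f && stat -c %a /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                # default medium: backed by node storage
```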
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:29:23.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name projected-secret-test-3faea98a-aea6-4bac-aa6f-0591794f2d34
STEP: Creating a pod to test consume secrets
Jun 11 11:29:23.834: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-85681fac-714d-4d74-8245-fc3e1345a642" in namespace "projected-5550" to be "Succeeded or Failed"
Jun 11 11:29:23.856: INFO: Pod "pod-projected-secrets-85681fac-714d-4d74-8245-fc3e1345a642": Phase="Pending", Reason="", readiness=false. Elapsed: 22.739675ms
Jun 11 11:29:27.299: INFO: Pod "pod-projected-secrets-85681fac-714d-4d74-8245-fc3e1345a642": Phase="Pending", Reason="", readiness=false. Elapsed: 3.465412656s
Jun 11 11:29:29.306: INFO: Pod "pod-projected-secrets-85681fac-714d-4d74-8245-fc3e1345a642": Phase="Running", Reason="", readiness=true. Elapsed: 5.47276588s
Jun 11 11:29:31.311: INFO: Pod "pod-projected-secrets-85681fac-714d-4d74-8245-fc3e1345a642": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.477123807s
STEP: Saw pod success
Jun 11 11:29:31.311: INFO: Pod "pod-projected-secrets-85681fac-714d-4d74-8245-fc3e1345a642" satisfied condition "Succeeded or Failed"
Jun 11 11:29:31.314: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-85681fac-714d-4d74-8245-fc3e1345a642 container secret-volume-test: 
STEP: delete the pod
Jun 11 11:29:31.384: INFO: Waiting for pod pod-projected-secrets-85681fac-714d-4d74-8245-fc3e1345a642 to disappear
Jun 11 11:29:31.395: INFO: Pod pod-projected-secrets-85681fac-714d-4d74-8245-fc3e1345a642 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:29:31.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5550" for this suite.

• [SLOW TEST:7.992 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":114,"skipped":1865,"failed":0}
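The projected-secret test above mounts the same secret into a pod through two separate projected volumes and reads it from both. A sketch follows; the secret name and container name are from the log, while the image, key, and mount paths are assumed.

```yaml
# Sketch: one secret consumed via two projected volumes in a single pod.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # hypothetical name
  namespace: projected-5550
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29                 # assumed test image
    command: ["/bin/sh", "-c",
      "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]  # key name assumed
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test-3faea98a-aea6-4bac-aa6f-0591794f2d34
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test-3faea98a-aea6-4bac-aa6f-0591794f2d34
```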
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:29:31.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0611 11:29:41.583902       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 11 11:29:41.583: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:29:41.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9637" for this suite.

• [SLOW TEST:10.187 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":115,"skipped":1878,"failed":0}
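The garbage-collector test above works because pods created by a replication controller carry an `ownerReferences` entry pointing at the RC; when the RC is deleted without orphaning, the garbage collector removes the dependents. A sketch of the metadata involved (all names and the UID are hypothetical; the log does not show the RC's name):

```yaml
# Sketch: the ownerReference the RC sets on each pod it creates.
apiVersion: v1
kind: Pod
metadata:
  name: simpletest-pod-example       # hypothetical
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc              # hypothetical RC name
    uid: 00000000-0000-0000-0000-000000000000   # placeholder; set by the API server
    controller: true
    blockOwnerDeletion: true
```

Deleting the owner with the default (non-orphaning) propagation is what lets the test "wait for all pods to be garbage collected" in the steps above.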
SS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:29:41.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-b8ef18bd-6ca2-40f3-8597-57a90e07110e in namespace container-probe-8578
Jun 11 11:29:45.734: INFO: Started pod liveness-b8ef18bd-6ca2-40f3-8597-57a90e07110e in namespace container-probe-8578
STEP: checking the pod's current state and verifying that restartCount is present
Jun 11 11:29:45.738: INFO: Initial restart count of pod liveness-b8ef18bd-6ca2-40f3-8597-57a90e07110e is 0
Jun 11 11:30:05.863: INFO: Restart count of pod container-probe-8578/liveness-b8ef18bd-6ca2-40f3-8597-57a90e07110e is now 1 (20.125193913s elapsed)
Jun 11 11:30:27.313: INFO: Restart count of pod container-probe-8578/liveness-b8ef18bd-6ca2-40f3-8597-57a90e07110e is now 2 (41.574785744s elapsed)
Jun 11 11:30:47.356: INFO: Restart count of pod container-probe-8578/liveness-b8ef18bd-6ca2-40f3-8597-57a90e07110e is now 3 (1m1.618606993s elapsed)
Jun 11 11:31:05.395: INFO: Restart count of pod container-probe-8578/liveness-b8ef18bd-6ca2-40f3-8597-57a90e07110e is now 4 (1m19.657629771s elapsed)
Jun 11 11:32:13.627: INFO: Restart count of pod container-probe-8578/liveness-b8ef18bd-6ca2-40f3-8597-57a90e07110e is now 5 (2m27.888934414s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:32:13.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8578" for this suite.

• [SLOW TEST:152.061 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":116,"skipped":1880,"failed":0}
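The probe test above creates a pod whose liveness probe begins failing after startup, so the kubelet repeatedly kills and restarts the container; the monotonically increasing restart count (1 through 5 in the log, with growing back-off intervals) is the assertion. A sketch with assumed image, command, and probe timings:

```yaml
# Sketch: a liveness probe that starts failing, driving restartCount upward.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-example            # hypothetical; the log pod uses a UUID name
  namespace: container-probe-8578
spec:
  containers:
  - name: liveness                  # assumed container name
    image: busybox:1.29             # assumed test image
    command: ["/bin/sh", "-c",
      "touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]  # fails once the file is removed
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
```

Each probe failure triggers a container restart; the kubelet's exponential back-off between restarts accounts for the widening gaps between restart-count increments in the log.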
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:32:13.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service nodeport-test with type=NodePort in namespace services-9893
STEP: creating replication controller nodeport-test in namespace services-9893
I0611 11:32:14.142006       7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-9893, replica count: 2
I0611 11:32:17.192480       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0611 11:32:20.192755       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jun 11 11:32:20.192: INFO: Creating new exec pod
Jun 11 11:32:25.211: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-9893 execpodz26vn -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Jun 11 11:32:29.059: INFO: stderr: "I0611 11:32:28.952779    2289 log.go:172] (0xc00003a790) (0xc0006f17c0) Create stream\nI0611 11:32:28.952826    2289 log.go:172] (0xc00003a790) (0xc0006f17c0) Stream added, broadcasting: 1\nI0611 11:32:28.955446    2289 log.go:172] (0xc00003a790) Reply frame received for 1\nI0611 11:32:28.955499    2289 log.go:172] (0xc00003a790) (0xc00063f5e0) Create stream\nI0611 11:32:28.955516    2289 log.go:172] (0xc00003a790) (0xc00063f5e0) Stream added, broadcasting: 3\nI0611 11:32:28.956595    2289 log.go:172] (0xc00003a790) Reply frame received for 3\nI0611 11:32:28.956636    2289 log.go:172] (0xc00003a790) (0xc000532a00) Create stream\nI0611 11:32:28.956654    2289 log.go:172] (0xc00003a790) (0xc000532a00) Stream added, broadcasting: 5\nI0611 11:32:28.957848    2289 log.go:172] (0xc00003a790) Reply frame received for 5\nI0611 11:32:29.039415    2289 log.go:172] (0xc00003a790) Data frame received for 5\nI0611 11:32:29.039467    2289 log.go:172] (0xc000532a00) (5) Data frame handling\nI0611 11:32:29.039493    2289 log.go:172] (0xc000532a00) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0611 11:32:29.048671    2289 log.go:172] (0xc00003a790) Data frame received for 5\nI0611 11:32:29.048694    2289 log.go:172] (0xc000532a00) (5) Data frame handling\nI0611 11:32:29.048712    2289 log.go:172] (0xc000532a00) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0611 11:32:29.049036    2289 log.go:172] (0xc00003a790) Data frame received for 5\nI0611 11:32:29.049076    2289 log.go:172] (0xc000532a00) (5) Data frame handling\nI0611 11:32:29.049693    2289 log.go:172] (0xc00003a790) Data frame received for 3\nI0611 11:32:29.049726    2289 log.go:172] (0xc00063f5e0) (3) Data frame handling\nI0611 11:32:29.051220    2289 log.go:172] (0xc00003a790) Data frame received for 1\nI0611 11:32:29.051243    2289 log.go:172] (0xc0006f17c0) (1) Data frame handling\nI0611 11:32:29.051256    2289 log.go:172] (0xc0006f17c0) (1) Data frame sent\nI0611 11:32:29.051275    2289 log.go:172] (0xc00003a790) (0xc0006f17c0) Stream removed, broadcasting: 1\nI0611 11:32:29.051325    2289 log.go:172] (0xc00003a790) Go away received\nI0611 11:32:29.051569    2289 log.go:172] (0xc00003a790) (0xc0006f17c0) Stream removed, broadcasting: 1\nI0611 11:32:29.051589    2289 log.go:172] (0xc00003a790) (0xc00063f5e0) Stream removed, broadcasting: 3\nI0611 11:32:29.051595    2289 log.go:172] (0xc00003a790) (0xc000532a00) Stream removed, broadcasting: 5\n"
Jun 11 11:32:29.059: INFO: stdout: ""
Jun 11 11:32:29.059: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-9893 execpodz26vn -- /bin/sh -x -c nc -zv -t -w 2 10.102.120.74 80'
Jun 11 11:32:29.291: INFO: stderr: "I0611 11:32:29.205005    2321 log.go:172] (0xc000ad0790) (0xc000a600a0) Create stream\nI0611 11:32:29.205068    2321 log.go:172] (0xc000ad0790) (0xc000a600a0) Stream added, broadcasting: 1\nI0611 11:32:29.215135    2321 log.go:172] (0xc000ad0790) Reply frame received for 1\nI0611 11:32:29.215198    2321 log.go:172] (0xc000ad0790) (0xc0006f52c0) Create stream\nI0611 11:32:29.215217    2321 log.go:172] (0xc000ad0790) (0xc0006f52c0) Stream added, broadcasting: 3\nI0611 11:32:29.217476    2321 log.go:172] (0xc000ad0790) Reply frame received for 3\nI0611 11:32:29.217566    2321 log.go:172] (0xc000ad0790) (0xc000a601e0) Create stream\nI0611 11:32:29.217645    2321 log.go:172] (0xc000ad0790) (0xc000a601e0) Stream added, broadcasting: 5\nI0611 11:32:29.218651    2321 log.go:172] (0xc000ad0790) Reply frame received for 5\nI0611 11:32:29.284904    2321 log.go:172] (0xc000ad0790) Data frame received for 3\nI0611 11:32:29.284930    2321 log.go:172] (0xc0006f52c0) (3) Data frame handling\nI0611 11:32:29.284970    2321 log.go:172] (0xc000ad0790) Data frame received for 5\nI0611 11:32:29.284995    2321 log.go:172] (0xc000a601e0) (5) Data frame handling\nI0611 11:32:29.285015    2321 log.go:172] (0xc000a601e0) (5) Data frame sent\nI0611 11:32:29.285026    2321 log.go:172] (0xc000ad0790) Data frame received for 5\nI0611 11:32:29.285033    2321 log.go:172] (0xc000a601e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.102.120.74 80\nConnection to 10.102.120.74 80 port [tcp/http] succeeded!\nI0611 11:32:29.286455    2321 log.go:172] (0xc000ad0790) Data frame received for 1\nI0611 11:32:29.286475    2321 log.go:172] (0xc000a600a0) (1) Data frame handling\nI0611 11:32:29.286487    2321 log.go:172] (0xc000a600a0) (1) Data frame sent\nI0611 11:32:29.286501    2321 log.go:172] (0xc000ad0790) (0xc000a600a0) Stream removed, broadcasting: 1\nI0611 11:32:29.286518    2321 log.go:172] (0xc000ad0790) Go away received\nI0611 11:32:29.286963    2321 log.go:172] (0xc000ad0790) (0xc000a600a0) Stream removed, broadcasting: 1\nI0611 11:32:29.286983    2321 log.go:172] (0xc000ad0790) (0xc0006f52c0) Stream removed, broadcasting: 3\nI0611 11:32:29.286992    2321 log.go:172] (0xc000ad0790) (0xc000a601e0) Stream removed, broadcasting: 5\n"
Jun 11 11:32:29.292: INFO: stdout: ""
Jun 11 11:32:29.292: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-9893 execpodz26vn -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.15 30327'
Jun 11 11:32:29.483: INFO: stderr: "I0611 11:32:29.420581    2342 log.go:172] (0xc0009fa6e0) (0xc0009d40a0) Create stream\nI0611 11:32:29.420634    2342 log.go:172] (0xc0009fa6e0) (0xc0009d40a0) Stream added, broadcasting: 1\nI0611 11:32:29.423480    2342 log.go:172] (0xc0009fa6e0) Reply frame received for 1\nI0611 11:32:29.423533    2342 log.go:172] (0xc0009fa6e0) (0xc0009b6000) Create stream\nI0611 11:32:29.423561    2342 log.go:172] (0xc0009fa6e0) (0xc0009b6000) Stream added, broadcasting: 3\nI0611 11:32:29.424487    2342 log.go:172] (0xc0009fa6e0) Reply frame received for 3\nI0611 11:32:29.424534    2342 log.go:172] (0xc0009fa6e0) (0xc0006032c0) Create stream\nI0611 11:32:29.424555    2342 log.go:172] (0xc0009fa6e0) (0xc0006032c0) Stream added, broadcasting: 5\nI0611 11:32:29.425698    2342 log.go:172] (0xc0009fa6e0) Reply frame received for 5\nI0611 11:32:29.476041    2342 log.go:172] (0xc0009fa6e0) Data frame received for 5\nI0611 11:32:29.476073    2342 log.go:172] (0xc0006032c0) (5) Data frame handling\nI0611 11:32:29.476095    2342 log.go:172] (0xc0006032c0) (5) Data frame sent\nI0611 11:32:29.476104    2342 log.go:172] (0xc0009fa6e0) Data frame received for 5\nI0611 11:32:29.476110    2342 log.go:172] (0xc0006032c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.15 30327\nConnection to 172.17.0.15 30327 port [tcp/30327] succeeded!\nI0611 11:32:29.476154    2342 log.go:172] (0xc0009fa6e0) Data frame received for 3\nI0611 11:32:29.476184    2342 log.go:172] (0xc0009b6000) (3) Data frame handling\nI0611 11:32:29.477721    2342 log.go:172] (0xc0009fa6e0) Data frame received for 1\nI0611 11:32:29.477742    2342 log.go:172] (0xc0009d40a0) (1) Data frame handling\nI0611 11:32:29.477751    2342 log.go:172] (0xc0009d40a0) (1) Data frame sent\nI0611 11:32:29.477762    2342 log.go:172] (0xc0009fa6e0) (0xc0009d40a0) Stream removed, broadcasting: 1\nI0611 11:32:29.477774    2342 log.go:172] (0xc0009fa6e0) Go away received\nI0611 11:32:29.478088    2342 log.go:172] (0xc0009fa6e0) (0xc0009d40a0) Stream removed, broadcasting: 1\nI0611 11:32:29.478114    2342 log.go:172] (0xc0009fa6e0) (0xc0009b6000) Stream removed, broadcasting: 3\nI0611 11:32:29.478122    2342 log.go:172] (0xc0009fa6e0) (0xc0006032c0) Stream removed, broadcasting: 5\n"
Jun 11 11:32:29.483: INFO: stdout: ""
Jun 11 11:32:29.483: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-9893 execpodz26vn -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 30327'
Jun 11 11:32:29.696: INFO: stderr: "I0611 11:32:29.610397    2364 log.go:172] (0xc00003b340) (0xc0005101e0) Create stream\nI0611 11:32:29.610457    2364 log.go:172] (0xc00003b340) (0xc0005101e0) Stream added, broadcasting: 1\nI0611 11:32:29.612908    2364 log.go:172] (0xc00003b340) Reply frame received for 1\nI0611 11:32:29.612959    2364 log.go:172] (0xc00003b340) (0xc000693180) Create stream\nI0611 11:32:29.612973    2364 log.go:172] (0xc00003b340) (0xc000693180) Stream added, broadcasting: 3\nI0611 11:32:29.614214    2364 log.go:172] (0xc00003b340) Reply frame received for 3\nI0611 11:32:29.614252    2364 log.go:172] (0xc00003b340) (0xc0002b0000) Create stream\nI0611 11:32:29.614263    2364 log.go:172] (0xc00003b340) (0xc0002b0000) Stream added, broadcasting: 5\nI0611 11:32:29.615062    2364 log.go:172] (0xc00003b340) Reply frame received for 5\nI0611 11:32:29.687338    2364 log.go:172] (0xc00003b340) Data frame received for 3\nI0611 11:32:29.687364    2364 log.go:172] (0xc000693180) (3) Data frame handling\nI0611 11:32:29.687615    2364 log.go:172] (0xc00003b340) Data frame received for 5\nI0611 11:32:29.687651    2364 log.go:172] (0xc0002b0000) (5) Data frame handling\nI0611 11:32:29.687672    2364 log.go:172] (0xc0002b0000) (5) Data frame sent\nI0611 11:32:29.687683    2364 log.go:172] (0xc00003b340) Data frame received for 5\nI0611 11:32:29.687694    2364 log.go:172] (0xc0002b0000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.18 30327\nConnection to 172.17.0.18 30327 port [tcp/30327] succeeded!\nI0611 11:32:29.689651    2364 log.go:172] (0xc00003b340) Data frame received for 1\nI0611 11:32:29.689689    2364 log.go:172] (0xc0005101e0) (1) Data frame handling\nI0611 11:32:29.689722    2364 log.go:172] (0xc0005101e0) (1) Data frame sent\nI0611 11:32:29.689745    2364 log.go:172] (0xc00003b340) (0xc0005101e0) Stream removed, broadcasting: 1\nI0611 11:32:29.689813    2364 log.go:172] (0xc00003b340) Go away received\nI0611 11:32:29.690290    2364 log.go:172] (0xc00003b340) (0xc0005101e0) Stream removed, broadcasting: 1\nI0611 11:32:29.690333    2364 log.go:172] (0xc00003b340) (0xc000693180) Stream removed, broadcasting: 3\nI0611 11:32:29.690358    2364 log.go:172] (0xc00003b340) (0xc0002b0000) Stream removed, broadcasting: 5\n"
Jun 11 11:32:29.697: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:32:29.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9893" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:16.051 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":117,"skipped":1914,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:32:29.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 11:32:29.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Jun 11 11:32:30.350: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-11T11:32:30Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-06-11T11:32:30Z]] name:name1 resourceVersion:11516320 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6d0ad5af-93ef-4d32-9be9-059dcf135990] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Jun 11 11:32:40.357: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-11T11:32:40Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-06-11T11:32:40Z]] name:name2 resourceVersion:11516384 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:96e04097-6bff-407a-b51b-13c590013667] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Jun 11 11:32:50.375: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-11T11:32:30Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-06-11T11:32:50Z]] name:name1 resourceVersion:11516414 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6d0ad5af-93ef-4d32-9be9-059dcf135990] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Jun 11 11:33:00.382: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-11T11:32:40Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-06-11T11:33:00Z]] name:name2 resourceVersion:11516440 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:96e04097-6bff-407a-b51b-13c590013667] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Jun 11 11:33:10.392: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-11T11:32:30Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-06-11T11:32:50Z]] name:name1 resourceVersion:11516470 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6d0ad5af-93ef-4d32-9be9-059dcf135990] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Jun 11 11:33:20.402: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-11T11:32:40Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-06-11T11:33:00Z]] name:name2 resourceVersion:11516500 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:96e04097-6bff-407a-b51b-13c590013667] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:33:30.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-5153" for this suite.

• [SLOW TEST:61.215 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":118,"skipped":1930,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:33:30.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 11:33:30.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:33:35.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4387" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":119,"skipped":1940,"failed":0}

------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:33:35.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:33:54.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2990" for this suite.

• [SLOW TEST:18.267 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":275,"completed":120,"skipped":1940,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:33:54.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating all guestbook components
Jun 11 11:33:54.181: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Jun 11 11:33:54.182: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1428'
Jun 11 11:33:54.518: INFO: stderr: ""
Jun 11 11:33:54.518: INFO: stdout: "service/agnhost-slave created\n"
Jun 11 11:33:54.518: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Jun 11 11:33:54.519: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1428'
Jun 11 11:33:54.809: INFO: stderr: ""
Jun 11 11:33:54.809: INFO: stdout: "service/agnhost-master created\n"
Jun 11 11:33:54.809: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jun 11 11:33:54.809: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1428'
Jun 11 11:33:55.167: INFO: stderr: ""
Jun 11 11:33:55.167: INFO: stdout: "service/frontend created\n"
Jun 11 11:33:55.168: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Jun 11 11:33:55.168: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1428'
Jun 11 11:33:55.470: INFO: stderr: ""
Jun 11 11:33:55.470: INFO: stdout: "deployment.apps/frontend created\n"
Jun 11 11:33:55.470: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jun 11 11:33:55.470: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1428'
Jun 11 11:33:55.793: INFO: stderr: ""
Jun 11 11:33:55.793: INFO: stdout: "deployment.apps/agnhost-master created\n"
Jun 11 11:33:55.793: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jun 11 11:33:55.793: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1428'
Jun 11 11:33:56.082: INFO: stderr: ""
Jun 11 11:33:56.082: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Jun 11 11:33:56.082: INFO: Waiting for all frontend pods to be Running.
Jun 11 11:34:11.133: INFO: Waiting for frontend to serve content.
Jun 11 11:34:11.143: INFO: Trying to add a new entry to the guestbook.
Jun 11 11:34:11.154: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jun 11 11:34:11.162: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1428'
Jun 11 11:34:11.319: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 11 11:34:11.320: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Jun 11 11:34:11.320: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1428'
Jun 11 11:34:11.871: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 11 11:34:11.871: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jun 11 11:34:11.872: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1428'
Jun 11 11:34:12.178: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 11 11:34:12.178: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jun 11 11:34:12.178: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1428'
Jun 11 11:34:12.389: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 11 11:34:12.389: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jun 11 11:34:12.389: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1428'
Jun 11 11:34:13.100: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 11 11:34:13.100: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jun 11 11:34:13.101: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1428'
Jun 11 11:34:13.569: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 11 11:34:13.569: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:34:13.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1428" for this suite.

• [SLOW TEST:19.760 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":275,"completed":121,"skipped":1986,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:34:13.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 11:34:15.233: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:34:16.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6204" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":275,"completed":122,"skipped":1992,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:34:17.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 11 11:34:18.246: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 11 11:34:20.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472058, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472058, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472058, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472058, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 11 11:34:22.579: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472058, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472058, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472058, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472058, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 11 11:34:25.548: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:34:25.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1334" for this suite.
STEP: Destroying namespace "webhook-1334-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.756 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":123,"skipped":2027,"failed":0}
SSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:34:25.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Jun 11 11:34:26.059: INFO: Created pod &Pod{ObjectMeta:{dns-7059  dns-7059 /api/v1/namespaces/dns-7059/pods/dns-7059 f06fd7ce-f4b1-4a80-977f-b790d2370199 11517015 0 2020-06-11 11:34:26 +0000 UTC   map[] map[] [] []  [{e2e.test Update v1 2020-06-11 11:34:26 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cm8kr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cm8kr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cm8kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:34:26.366: INFO: The status of Pod dns-7059 is Pending, waiting for it to be Running (with Ready = true)
Jun 11 11:34:28.370: INFO: The status of Pod dns-7059 is Pending, waiting for it to be Running (with Ready = true)
Jun 11 11:34:30.370: INFO: The status of Pod dns-7059 is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
Jun 11 11:34:30.370: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-7059 PodName:dns-7059 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 11 11:34:30.371: INFO: >>> kubeConfig: /root/.kube/config
I0611 11:34:30.412083       7 log.go:172] (0xc00251b970) (0xc000eb30e0) Create stream
I0611 11:34:30.412113       7 log.go:172] (0xc00251b970) (0xc000eb30e0) Stream added, broadcasting: 1
I0611 11:34:30.414472       7 log.go:172] (0xc00251b970) Reply frame received for 1
I0611 11:34:30.414513       7 log.go:172] (0xc00251b970) (0xc000c6a500) Create stream
I0611 11:34:30.414538       7 log.go:172] (0xc00251b970) (0xc000c6a500) Stream added, broadcasting: 3
I0611 11:34:30.415498       7 log.go:172] (0xc00251b970) Reply frame received for 3
I0611 11:34:30.415562       7 log.go:172] (0xc00251b970) (0xc000eb3360) Create stream
I0611 11:34:30.415578       7 log.go:172] (0xc00251b970) (0xc000eb3360) Stream added, broadcasting: 5
I0611 11:34:30.416607       7 log.go:172] (0xc00251b970) Reply frame received for 5
I0611 11:34:30.521643       7 log.go:172] (0xc00251b970) Data frame received for 3
I0611 11:34:30.521691       7 log.go:172] (0xc000c6a500) (3) Data frame handling
I0611 11:34:30.521724       7 log.go:172] (0xc000c6a500) (3) Data frame sent
I0611 11:34:30.523036       7 log.go:172] (0xc00251b970) Data frame received for 5
I0611 11:34:30.523099       7 log.go:172] (0xc000eb3360) (5) Data frame handling
I0611 11:34:30.523284       7 log.go:172] (0xc00251b970) Data frame received for 3
I0611 11:34:30.523321       7 log.go:172] (0xc000c6a500) (3) Data frame handling
I0611 11:34:30.525450       7 log.go:172] (0xc00251b970) Data frame received for 1
I0611 11:34:30.525500       7 log.go:172] (0xc000eb30e0) (1) Data frame handling
I0611 11:34:30.525544       7 log.go:172] (0xc000eb30e0) (1) Data frame sent
I0611 11:34:30.525569       7 log.go:172] (0xc00251b970) (0xc000eb30e0) Stream removed, broadcasting: 1
I0611 11:34:30.525603       7 log.go:172] (0xc00251b970) Go away received
I0611 11:34:30.525715       7 log.go:172] (0xc00251b970) (0xc000eb30e0) Stream removed, broadcasting: 1
I0611 11:34:30.525741       7 log.go:172] (0xc00251b970) (0xc000c6a500) Stream removed, broadcasting: 3
I0611 11:34:30.525757       7 log.go:172] (0xc00251b970) (0xc000eb3360) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Jun 11 11:34:30.525: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-7059 PodName:dns-7059 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 11 11:34:30.525: INFO: >>> kubeConfig: /root/.kube/config
I0611 11:34:30.561319       7 log.go:172] (0xc0031fe000) (0xc000eb37c0) Create stream
I0611 11:34:30.561339       7 log.go:172] (0xc0031fe000) (0xc000eb37c0) Stream added, broadcasting: 1
I0611 11:34:30.563262       7 log.go:172] (0xc0031fe000) Reply frame received for 1
I0611 11:34:30.563304       7 log.go:172] (0xc0031fe000) (0xc0023c0140) Create stream
I0611 11:34:30.563320       7 log.go:172] (0xc0031fe000) (0xc0023c0140) Stream added, broadcasting: 3
I0611 11:34:30.564408       7 log.go:172] (0xc0031fe000) Reply frame received for 3
I0611 11:34:30.564470       7 log.go:172] (0xc0031fe000) (0xc000c6a640) Create stream
I0611 11:34:30.564485       7 log.go:172] (0xc0031fe000) (0xc000c6a640) Stream added, broadcasting: 5
I0611 11:34:30.565772       7 log.go:172] (0xc0031fe000) Reply frame received for 5
I0611 11:34:30.628462       7 log.go:172] (0xc0031fe000) Data frame received for 3
I0611 11:34:30.628498       7 log.go:172] (0xc0023c0140) (3) Data frame handling
I0611 11:34:30.628521       7 log.go:172] (0xc0023c0140) (3) Data frame sent
I0611 11:34:30.630149       7 log.go:172] (0xc0031fe000) Data frame received for 5
I0611 11:34:30.630188       7 log.go:172] (0xc000c6a640) (5) Data frame handling
I0611 11:34:30.630451       7 log.go:172] (0xc0031fe000) Data frame received for 3
I0611 11:34:30.630516       7 log.go:172] (0xc0023c0140) (3) Data frame handling
I0611 11:34:30.632171       7 log.go:172] (0xc0031fe000) Data frame received for 1
I0611 11:34:30.632197       7 log.go:172] (0xc000eb37c0) (1) Data frame handling
I0611 11:34:30.632229       7 log.go:172] (0xc000eb37c0) (1) Data frame sent
I0611 11:34:30.632243       7 log.go:172] (0xc0031fe000) (0xc000eb37c0) Stream removed, broadcasting: 1
I0611 11:34:30.632258       7 log.go:172] (0xc0031fe000) Go away received
I0611 11:34:30.632366       7 log.go:172] (0xc0031fe000) (0xc000eb37c0) Stream removed, broadcasting: 1
I0611 11:34:30.632398       7 log.go:172] (0xc0031fe000) (0xc0023c0140) Stream removed, broadcasting: 3
I0611 11:34:30.632421       7 log.go:172] (0xc0031fe000) (0xc000c6a640) Stream removed, broadcasting: 5
Jun 11 11:34:30.632: INFO: Deleting pod dns-7059...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:34:30.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7059" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":124,"skipped":2030,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:34:30.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod test-webserver-fc708899-04f6-4abf-bafe-ed4946d8b990 in namespace container-probe-7639
Jun 11 11:34:37.301: INFO: Started pod test-webserver-fc708899-04f6-4abf-bafe-ed4946d8b990 in namespace container-probe-7639
STEP: checking the pod's current state and verifying that restartCount is present
Jun 11 11:34:37.304: INFO: Initial restart count of pod test-webserver-fc708899-04f6-4abf-bafe-ed4946d8b990 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:38:38.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7639" for this suite.

• [SLOW TEST:247.438 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":125,"skipped":2062,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:38:38.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0611 11:38:39.537723       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 11 11:38:39.537: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:38:39.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-977" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":126,"skipped":2092,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:38:39.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 11 11:38:40.808: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 11 11:38:42.819: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472320, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472320, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472321, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472320, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 11 11:38:44.843: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472320, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472320, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472321, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472320, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 11 11:38:47.863: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 11:38:47.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2182-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:38:49.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7722" for this suite.
STEP: Destroying namespace "webhook-7722-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.752 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":127,"skipped":2126,"failed":0}
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:38:49.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jun 11 11:38:49.442: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:38:49.780: INFO: Number of nodes with available pods: 0
Jun 11 11:38:49.780: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:38:50.787: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:38:50.791: INFO: Number of nodes with available pods: 0
Jun 11 11:38:50.791: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:38:51.786: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:38:51.791: INFO: Number of nodes with available pods: 0
Jun 11 11:38:51.791: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:38:52.786: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:38:52.790: INFO: Number of nodes with available pods: 0
Jun 11 11:38:52.790: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:38:53.785: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:38:53.788: INFO: Number of nodes with available pods: 0
Jun 11 11:38:53.788: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:38:54.786: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:38:54.790: INFO: Number of nodes with available pods: 2
Jun 11 11:38:54.790: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jun 11 11:38:54.829: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:38:54.868: INFO: Number of nodes with available pods: 2
Jun 11 11:38:54.868: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4488, will wait for the garbage collector to delete the pods
Jun 11 11:38:55.961: INFO: Deleting DaemonSet.extensions daemon-set took: 6.202891ms
Jun 11 11:38:56.061: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.446273ms
Jun 11 11:39:03.765: INFO: Number of nodes with available pods: 0
Jun 11 11:39:03.765: INFO: Number of running nodes: 0, number of available pods: 0
Jun 11 11:39:03.768: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4488/daemonsets","resourceVersion":"11518044"},"items":null}

Jun 11 11:39:03.770: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4488/pods","resourceVersion":"11518044"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:39:03.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4488" for this suite.

• [SLOW TEST:14.516 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":128,"skipped":2132,"failed":0}
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:39:03.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:39:07.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-155" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":129,"skipped":2133,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:39:07.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching services
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:39:08.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1076" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":130,"skipped":2154,"failed":0}

------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:39:08.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jun 11 11:39:16.169: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 11 11:39:16.179: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 11 11:39:18.179: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 11 11:39:18.184: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 11 11:39:20.179: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 11 11:39:20.184: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 11 11:39:22.179: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 11 11:39:22.198: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 11 11:39:24.179: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 11 11:39:24.185: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:39:24.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5391" for this suite.

• [SLOW TEST:16.129 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":131,"skipped":2154,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:39:24.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 11 11:39:24.950: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 11 11:39:26.961: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472364, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472364, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472365, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472364, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 11 11:39:28.966: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472364, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472364, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472365, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472364, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 11 11:39:31.992: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:39:32.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8814" for this suite.
STEP: Destroying namespace "webhook-8814-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.516 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":132,"skipped":2170,"failed":0}
SS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:39:32.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:39:32.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-8670" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":133,"skipped":2172,"failed":0}

------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:39:32.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Jun 11 11:39:32.980: INFO: Waiting up to 5m0s for pod "downward-api-8694e2d8-ddef-4d69-b4b7-50a1fbcef11c" in namespace "downward-api-419" to be "Succeeded or Failed"
Jun 11 11:39:33.021: INFO: Pod "downward-api-8694e2d8-ddef-4d69-b4b7-50a1fbcef11c": Phase="Pending", Reason="", readiness=false. Elapsed: 40.548283ms
Jun 11 11:39:35.024: INFO: Pod "downward-api-8694e2d8-ddef-4d69-b4b7-50a1fbcef11c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044397415s
Jun 11 11:39:37.028: INFO: Pod "downward-api-8694e2d8-ddef-4d69-b4b7-50a1fbcef11c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048074254s
STEP: Saw pod success
Jun 11 11:39:37.028: INFO: Pod "downward-api-8694e2d8-ddef-4d69-b4b7-50a1fbcef11c" satisfied condition "Succeeded or Failed"
Jun 11 11:39:37.031: INFO: Trying to get logs from node kali-worker2 pod downward-api-8694e2d8-ddef-4d69-b4b7-50a1fbcef11c container dapi-container: 
STEP: delete the pod
Jun 11 11:39:37.065: INFO: Waiting for pod downward-api-8694e2d8-ddef-4d69-b4b7-50a1fbcef11c to disappear
Jun 11 11:39:37.079: INFO: Pod downward-api-8694e2d8-ddef-4d69-b4b7-50a1fbcef11c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:39:37.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-419" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":134,"skipped":2172,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:39:37.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206
STEP: creating the pod
Jun 11 11:39:37.194: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5585'
Jun 11 11:39:37.539: INFO: stderr: ""
Jun 11 11:39:37.539: INFO: stdout: "pod/pause created\n"
Jun 11 11:39:37.539: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jun 11 11:39:37.539: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5585" to be "running and ready"
Jun 11 11:39:37.625: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 85.895658ms
Jun 11 11:39:39.673: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133913796s
Jun 11 11:39:41.682: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.142750297s
Jun 11 11:39:41.682: INFO: Pod "pause" satisfied condition "running and ready"
Jun 11 11:39:41.682: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: adding the label testing-label with value testing-label-value to a pod
Jun 11 11:39:41.682: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-5585'
Jun 11 11:39:41.786: INFO: stderr: ""
Jun 11 11:39:41.786: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jun 11 11:39:41.786: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5585'
Jun 11 11:39:41.887: INFO: stderr: ""
Jun 11 11:39:41.887: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jun 11 11:39:41.887: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-5585'
Jun 11 11:39:41.990: INFO: stderr: ""
Jun 11 11:39:41.990: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jun 11 11:39:41.990: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5585'
Jun 11 11:39:42.091: INFO: stderr: ""
Jun 11 11:39:42.091: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213
STEP: using delete to clean up resources
Jun 11 11:39:42.091: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5585'
Jun 11 11:39:42.246: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 11 11:39:42.246: INFO: stdout: "pod \"pause\" force deleted\n"
Jun 11 11:39:42.246: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-5585'
Jun 11 11:39:42.523: INFO: stderr: "No resources found in kubectl-5585 namespace.\n"
Jun 11 11:39:42.523: INFO: stdout: ""
Jun 11 11:39:42.523: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-5585 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jun 11 11:39:42.615: INFO: stderr: ""
Jun 11 11:39:42.615: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:39:42.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5585" for this suite.

• [SLOW TEST:5.573 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":275,"completed":135,"skipped":2206,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:39:42.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Jun 11 11:39:42.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:39:58.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2032" for this suite.

• [SLOW TEST:16.177 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":136,"skipped":2220,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:39:58.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-30503f97-02f7-48d5-becc-b5148803c848
STEP: Creating a pod to test consume configMaps
Jun 11 11:39:58.968: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-84be2ac8-05e3-4cfb-b840-508e622cfd4f" in namespace "projected-5113" to be "Succeeded or Failed"
Jun 11 11:39:58.972: INFO: Pod "pod-projected-configmaps-84be2ac8-05e3-4cfb-b840-508e622cfd4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.23434ms
Jun 11 11:40:00.988: INFO: Pod "pod-projected-configmaps-84be2ac8-05e3-4cfb-b840-508e622cfd4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019924997s
Jun 11 11:40:02.992: INFO: Pod "pod-projected-configmaps-84be2ac8-05e3-4cfb-b840-508e622cfd4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023938225s
Jun 11 11:40:04.996: INFO: Pod "pod-projected-configmaps-84be2ac8-05e3-4cfb-b840-508e622cfd4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028517073s
STEP: Saw pod success
Jun 11 11:40:04.996: INFO: Pod "pod-projected-configmaps-84be2ac8-05e3-4cfb-b840-508e622cfd4f" satisfied condition "Succeeded or Failed"
Jun 11 11:40:05.000: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-84be2ac8-05e3-4cfb-b840-508e622cfd4f container projected-configmap-volume-test: 
STEP: delete the pod
Jun 11 11:40:05.060: INFO: Waiting for pod pod-projected-configmaps-84be2ac8-05e3-4cfb-b840-508e622cfd4f to disappear
Jun 11 11:40:05.067: INFO: Pod pod-projected-configmaps-84be2ac8-05e3-4cfb-b840-508e622cfd4f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:40:05.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5113" for this suite.

• [SLOW TEST:6.208 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":137,"skipped":2230,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:40:05.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jun 11 11:40:05.212: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:40:05.216: INFO: Number of nodes with available pods: 0
Jun 11 11:40:05.217: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:40:06.228: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:40:06.231: INFO: Number of nodes with available pods: 0
Jun 11 11:40:06.231: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:40:07.566: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:40:07.589: INFO: Number of nodes with available pods: 0
Jun 11 11:40:07.589: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:40:08.325: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:40:08.405: INFO: Number of nodes with available pods: 0
Jun 11 11:40:08.405: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:40:09.220: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:40:09.223: INFO: Number of nodes with available pods: 0
Jun 11 11:40:09.223: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:40:10.220: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:40:10.223: INFO: Number of nodes with available pods: 2
Jun 11 11:40:10.223: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jun 11 11:40:10.319: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:40:10.322: INFO: Number of nodes with available pods: 1
Jun 11 11:40:10.322: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:40:11.328: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:40:11.331: INFO: Number of nodes with available pods: 1
Jun 11 11:40:11.331: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:40:12.327: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:40:12.331: INFO: Number of nodes with available pods: 1
Jun 11 11:40:12.331: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:40:13.328: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:40:13.331: INFO: Number of nodes with available pods: 1
Jun 11 11:40:13.331: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:40:14.328: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:40:14.355: INFO: Number of nodes with available pods: 1
Jun 11 11:40:14.355: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:40:15.329: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:40:15.333: INFO: Number of nodes with available pods: 1
Jun 11 11:40:15.333: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:40:16.327: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:40:16.330: INFO: Number of nodes with available pods: 1
Jun 11 11:40:16.330: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:40:17.329: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:40:17.334: INFO: Number of nodes with available pods: 1
Jun 11 11:40:17.334: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:40:18.713: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:40:18.716: INFO: Number of nodes with available pods: 1
Jun 11 11:40:18.716: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:40:19.728: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:40:19.732: INFO: Number of nodes with available pods: 2
Jun 11 11:40:19.732: INFO: Number of running nodes: 2, number of available pods: 2
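The polling loop above filters out nodes whose `NoSchedule` taints the DaemonSet pods cannot tolerate (here the `node-role.kubernetes.io/master` taint on kali-control-plane), then waits until every remaining node runs exactly one available daemon pod. A minimal sketch of that readiness check, with simplified toleration matching and hypothetical helper names (not the e2e framework's actual functions):

```python
def tolerates(taint, tolerations):
    # Simplified matching: a toleration covers a taint when the key matches
    # and the effect is either unset or equal to the taint's effect.
    # (The real matcher also handles Exists/Equal operators and values.)
    return any(
        t.get("key") == taint["key"] and t.get("effect") in (None, taint["effect"])
        for t in tolerations
    )

def node_schedulable(node, tolerations):
    # DaemonSet pods skip nodes carrying untolerated NoSchedule taints.
    return all(
        taint["effect"] != "NoSchedule" or tolerates(taint, tolerations)
        for taint in node.get("taints", [])
    )

def daemonset_ready(nodes, tolerations, available_pods):
    # available_pods maps node name -> number of available daemon pods.
    target = [n["name"] for n in nodes if node_schedulable(n, tolerations)]
    return all(available_pods.get(name, 0) == 1 for name in target)

nodes = [
    {"name": "kali-control-plane",
     "taints": [{"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}]},
    {"name": "kali-worker", "taints": []},
    {"name": "kali-worker2", "taints": []},
]
# Mid-rollout (only kali-worker has a pod) vs. the final state at 11:40:19.
print(daemonset_ready(nodes, [], {"kali-worker": 1}))                      # False
print(daemonset_ready(nodes, [], {"kali-worker": 1, "kali-worker2": 1}))  # True
```

With no tolerations, only the two workers count, so "Number of running nodes: 2, number of available pods: 2" is the terminating condition of the loop.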
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-396, will wait for the garbage collector to delete the pods
Jun 11 11:40:19.930: INFO: Deleting DaemonSet.extensions daemon-set took: 24.422609ms
Jun 11 11:40:20.231: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.361696ms
Jun 11 11:40:33.834: INFO: Number of nodes with available pods: 0
Jun 11 11:40:33.834: INFO: Number of running nodes: 0, number of available pods: 0
Jun 11 11:40:33.836: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-396/daemonsets","resourceVersion":"11518660"},"items":null}

Jun 11 11:40:33.839: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-396/pods","resourceVersion":"11518660"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:40:33.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-396" for this suite.

• [SLOW TEST:28.779 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":138,"skipped":2241,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:40:33.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 11:40:33.949: INFO: Creating deployment "webserver-deployment"
Jun 11 11:40:33.961: INFO: Waiting for observed generation 1
Jun 11 11:40:35.971: INFO: Waiting for all required pods to come up
Jun 11 11:40:35.975: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jun 11 11:40:49.984: INFO: Waiting for deployment "webserver-deployment" to complete
Jun 11 11:40:49.989: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jun 11 11:40:49.995: INFO: Updating deployment webserver-deployment
Jun 11 11:40:49.995: INFO: Waiting for observed generation 2
Jun 11 11:40:52.192: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jun 11 11:40:52.196: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jun 11 11:40:52.199: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jun 11 11:40:52.205: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jun 11 11:40:52.205: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jun 11 11:40:52.383: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jun 11 11:40:52.389: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jun 11 11:40:52.389: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jun 11 11:40:52.402: INFO: Updating deployment webserver-deployment
Jun 11 11:40:52.402: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jun 11 11:40:52.663: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jun 11 11:40:53.031: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
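The replica counts the suite verifies here follow from proportional scaling arithmetic. Scaling from 10 to 30 with maxSurge=3 moves the surge-adjusted ceiling from 13 (10+3) to 33 (30+3), and each ReplicaSet is grown in proportion to its share of the old ceiling. A simplified model of that math (an assumption: the real controller works from the desired-replicas/max-replicas annotations and reconciles rounding leftovers, which this sketch skips):

```python
def proportional_sizes(rs_sizes, old_total_max, new_total_max):
    # Scale each ReplicaSet by its share of the old surge-adjusted total:
    # round(size * new_max / old_max). Hypothetical helper, not the
    # controller's actual function.
    return [round(size * new_total_max / old_total_max) for size in rs_sizes]

# From the log: before the scale-up the first (old-image) RS holds 8 pods
# and the second (broken-image) RS holds 5, totalling 13 = 10 + maxSurge(3).
old_rs, new_rs = proportional_sizes([8, 5], 13, 33)
print(old_rs, new_rs)  # 20 13
```

Those are exactly the `.spec.replicas` values of 20 and 13 that the two verification steps above assert.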
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jun 11 11:40:53.195: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-8882 /apis/apps/v1/namespaces/deployment-8882/deployments/webserver-deployment fcfe97a7-3099-4d61-8432-e6e665f945f2 11518917 3 2020-06-11 11:40:33 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-06-11 11:40:52 +0000 UTC FieldsV1 FieldsV1{Raw:*[<managedFields byte dump elided>],}} {kube-controller-manager Update apps/v1 2020-06-11 11:40:52 +0000 UTC FieldsV1 &FieldsV1{Raw:*[<managedFields byte dump elided>],}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00348dfd8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-06-11 11:40:50 +0000 UTC,LastTransitionTime:2020-06-11 11:40:33 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-06-11 11:40:52 +0000 UTC,LastTransitionTime:2020-06-11 11:40:52 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}
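The dumped status can be checked against the RollingUpdate bounds in the spec. With 10 desired replicas at the time of the availability check, maxUnavailable=2 sets the availability floor at 8 and maxSurge=3 caps the total at 13. A small sketch of those bounds (hypothetical helper name, assuming absolute-integer values for maxUnavailable/maxSurge as in this spec):

```python
def rolling_update_bounds(desired, max_unavailable, max_surge):
    # Floor on available pods and ceiling on total pods during a rollout.
    return desired - max_unavailable, desired + max_surge

min_available, max_total = rolling_update_bounds(10, 2, 3)
print(min_available, max_total)  # 8 13
```

The dump reports AvailableReplicas:8, exactly the floor, which is why the suite could verify the "minimum required number of available replicas" before scaling up.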

Jun 11 11:40:53.251: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4  deployment-8882 /apis/apps/v1/namespaces/deployment-8882/replicasets/webserver-deployment-6676bcd6d4 e31c8a6a-525f-4719-93f0-f055e4d6aad1 11518972 3 2020-06-11 11:40:49 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment fcfe97a7-3099-4d61-8432-e6e665f945f2 0xc003cd6477 0xc003cd6478}] []  [{kube-controller-manager Update apps/v1 2020-06-11 11:40:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[<managedFields byte dump elided>],}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003cd64f8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jun 11 11:40:53.251: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Jun 11 11:40:53.251: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797  deployment-8882 /apis/apps/v1/namespaces/deployment-8882/replicasets/webserver-deployment-84855cf797 9f4a11f0-ebcd-4eec-8634-7aa13ba409d2 11518971 3 2020-06-11 11:40:33 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment fcfe97a7-3099-4d61-8432-e6e665f945f2 0xc003cd6557 0xc003cd6558}] []  [{kube-controller-manager Update apps/v1 2020-06-11 11:40:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[<managedFields byte dump elided>],}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003cd65c8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Jun 11 11:40:53.362: INFO: Pod "webserver-deployment-6676bcd6d4-2pprb" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-2pprb webserver-deployment-6676bcd6d4- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-6676bcd6d4-2pprb be18dbdc-71db-445f-bb45-c81c0a2f2a20 11518919 0 2020-06-11 11:40:52 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e31c8a6a-525f-4719-93f0-f055e4d6aad1 0xc003b327f7 0xc003b327f8}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:52 +0000 UTC FieldsV1 FieldsV1{Raw:*[<managedFields byte dump elided>],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.363: INFO: Pod "webserver-deployment-6676bcd6d4-4gqw4" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-4gqw4 webserver-deployment-6676bcd6d4- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-6676bcd6d4-4gqw4 69badaa0-b31a-4ec1-8487-8b8992816892 11518890 0 2020-06-11 11:40:50 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e31c8a6a-525f-4719-93f0-f055e4d6aad1 0xc003b32947 0xc003b32948}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:50 +0000 UTC FieldsV1 FieldsV1{Raw:*[<managedFields byte dump elided>],}} {kubelet Update v1 2020-06-11 11:40:50 +0000 UTC FieldsV1 &FieldsV1{Raw:*[<managedFields byte dump elided>],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]En
vFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-06-11 11:40:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.363: INFO: Pod "webserver-deployment-6676bcd6d4-5rr6z" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-5rr6z webserver-deployment-6676bcd6d4- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-6676bcd6d4-5rr6z fbc952cc-042c-444c-ba31-2ac1a82541c2 11518883 0 2020-06-11 11:40:50 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e31c8a6a-525f-4719-93f0-f055e4d6aad1 0xc003b32b57 0xc003b32b58}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:50 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e31c8a6a-525f-4719-93f0-f055e4d6aad1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-06-11 11:40:50 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]En
vFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-06-11 11:40:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.363: INFO: Pod "webserver-deployment-6676bcd6d4-6rzbw" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-6rzbw webserver-deployment-6676bcd6d4- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-6676bcd6d4-6rzbw af1d396f-8c02-4fa0-8194-0b4f6cc113f3 11518867 0 2020-06-11 11:40:50 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e31c8a6a-525f-4719-93f0-f055e4d6aad1 0xc003b32d37 0xc003b32d38}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:50 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e31c8a6a-525f-4719-93f0-f055e4d6aad1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-06-11 11:40:50 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]En
vFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-06-11 11:40:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.364: INFO: Pod "webserver-deployment-6676bcd6d4-b2cfj" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-b2cfj webserver-deployment-6676bcd6d4- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-6676bcd6d4-b2cfj 11d04527-75a6-4af2-a269-0555fa035f93 11518895 0 2020-06-11 11:40:50 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e31c8a6a-525f-4719-93f0-f055e4d6aad1 0xc003b32f87 0xc003b32f88}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:50 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e31c8a6a-525f-4719-93f0-f055e4d6aad1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-06-11 11:40:50 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]En
vFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-06-11 11:40:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.364: INFO: Pod "webserver-deployment-6676bcd6d4-h9znl" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-h9znl webserver-deployment-6676bcd6d4- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-6676bcd6d4-h9znl c3bb93f0-d313-42fa-962d-504fb83f5707 11518939 0 2020-06-11 11:40:52 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e31c8a6a-525f-4719-93f0-f055e4d6aad1 0xc003b331c7 0xc003b331c8}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:52 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e31c8a6a-525f-4719-93f0-f055e4d6aad1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinux
Options:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.364: INFO: Pod "webserver-deployment-6676bcd6d4-j5kh7" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-j5kh7 webserver-deployment-6676bcd6d4- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-6676bcd6d4-j5kh7 80844cd8-6d57-45d8-87eb-c4d8a23c07ba 11518959 0 2020-06-11 11:40:53 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e31c8a6a-525f-4719-93f0-f055e4d6aad1 0xc003b33387 0xc003b33388}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 51 49 99 56 97 54 97 45 53 50 53 102 45 52 55 49 57 45 57 51 102 48 45 102 48 53 53 101 52 100 54 97 97 100 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinux
Options:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.364: INFO: Pod "webserver-deployment-6676bcd6d4-kfbtw" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-kfbtw webserver-deployment-6676bcd6d4- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-6676bcd6d4-kfbtw a4e6b521-7054-4fd2-94f3-ec2cabda6f9f 11518938 0 2020-06-11 11:40:52 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e31c8a6a-525f-4719-93f0-f055e4d6aad1 0xc003b33527 0xc003b33528}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:52 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 51 49 99 56 97 54 97 45 53 50 53 102 45 52 55 49 57 45 57 51 102 48 45 102 48 53 53 101 52 100 54 97 97 100 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinux
Options:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.364: INFO: Pod "webserver-deployment-6676bcd6d4-kw4rw" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-kw4rw webserver-deployment-6676bcd6d4- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-6676bcd6d4-kw4rw 842b5d49-6a76-42d1-994c-4834faf1b418 11518960 0 2020-06-11 11:40:53 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e31c8a6a-525f-4719-93f0-f055e4d6aad1 0xc003b336d7 0xc003b336d8}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 51 49 99 56 97 54 97 45 53 50 53 102 45 52 55 49 57 45 57 51 102 48 45 102 48 53 53 101 52 100 54 97 97 100 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinux
Options:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.365: INFO: Pod "webserver-deployment-6676bcd6d4-l4kqk" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-l4kqk webserver-deployment-6676bcd6d4- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-6676bcd6d4-l4kqk 2582744a-18c3-4db9-b24e-bc63204d6eb9 11518873 0 2020-06-11 11:40:50 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e31c8a6a-525f-4719-93f0-f055e4d6aad1 0xc003b33857 0xc003b33858}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:50 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 51 49 99 56 97 54 97 45 53 50 53 102 45 52 55 49 57 45 57 51 102 48 45 102 48 53 53 101 52 100 54 97 97 100 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-06-11 11:40:50 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 
34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]En
vFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-06-11 11:40:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
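A note on reading the dumps above: the `FieldsV1{Raw:*[123 34 102 ...]}` sequences in each pod's managedFields entry are JSON documents that Go's `%v` formatting has printed as decimal byte values. A small helper (a hypothetical `decode_fieldsv1`, not part of the e2e suite) can turn such a byte list back into readable JSON; the sample below uses the opening bytes that appear in the dumps, closed into a complete fragment:

```python
import json

def decode_fieldsv1(byte_values):
    """Convert a FieldsV1 Raw list of decimal byte values into a parsed JSON object."""
    return json.loads(bytes(byte_values).decode("utf-8"))

# 123 34 102 58 ... decodes to {"f:metadata":{"f:generateName":{}}}
raw = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123,
       34, 102, 58, 103, 101, 110, 101, 114, 97, 116, 101, 78, 97, 109, 101,
       34, 58, 123, 125, 125, 125]
print(decode_fieldsv1(raw))  # → {'f:metadata': {'f:generateName': {}}}
```

Decoded this way, the entries are the server-side-apply field-ownership maps (`f:metadata`, `f:spec`, `f:status`, ...) recorded for kube-controller-manager and kubelet.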
Jun 11 11:40:53.365: INFO: Pod "webserver-deployment-6676bcd6d4-lsmsh" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-lsmsh webserver-deployment-6676bcd6d4- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-6676bcd6d4-lsmsh aff7387a-a2ad-4918-beeb-68442c99d163 11518958 0 2020-06-11 11:40:53 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e31c8a6a-525f-4719-93f0-f055e4d6aad1 0xc003b33a57 0xc003b33a58}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:53 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e31c8a6a-525f-4719-93f0-f055e4d6aad1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinux
Options:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.365: INFO: Pod "webserver-deployment-6676bcd6d4-p8ljb" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-p8ljb webserver-deployment-6676bcd6d4- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-6676bcd6d4-p8ljb e2ecad8c-712f-4c15-8481-022f1e6560dd 11518957 0 2020-06-11 11:40:53 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e31c8a6a-525f-4719-93f0-f055e4d6aad1 0xc003b33b97 0xc003b33b98}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:53 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e31c8a6a-525f-4719-93f0-f055e4d6aad1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinux
Options:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.366: INFO: Pod "webserver-deployment-6676bcd6d4-rswq4" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-rswq4 webserver-deployment-6676bcd6d4- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-6676bcd6d4-rswq4 ea756e68-dbe2-41fc-98a1-b4b67d89c79a 11518969 0 2020-06-11 11:40:53 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e31c8a6a-525f-4719-93f0-f055e4d6aad1 0xc003b33ce7 0xc003b33ce8}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:53 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e31c8a6a-525f-4719-93f0-f055e4d6aad1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinux
Options:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.366: INFO: Pod "webserver-deployment-84855cf797-44lxw" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-44lxw webserver-deployment-84855cf797- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-84855cf797-44lxw 593dd0dd-cc88-4345-8599-d1d31bf669e5 11518948 0 2020-06-11 11:40:53 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9f4a11f0-ebcd-4eec-8634-7aa13ba409d2 0xc003b33e47 0xc003b33e48}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:53 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f4a11f0-ebcd-4eec-8634-7aa13ba409d2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:ni
l,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.366: INFO: Pod "webserver-deployment-84855cf797-5knrc" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-5knrc webserver-deployment-84855cf797- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-84855cf797-5knrc db0b3d56-1ad0-456b-a639-d66a10059aa0 11518980 0 2020-06-11 11:40:52 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9f4a11f0-ebcd-4eec-8634-7aa13ba409d2 0xc003b33fa7 0xc003b33fa8}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:52 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f4a11f0-ebcd-4eec-8634-7aa13ba409d2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-06-11 11:40:53 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-06-11 11:40:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.366: INFO: Pod "webserver-deployment-84855cf797-6mmks" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-6mmks webserver-deployment-84855cf797- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-84855cf797-6mmks 3782a7d1-9bc3-415f-97b3-01096c1e2dd2 11518927 0 2020-06-11 11:40:52 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9f4a11f0-ebcd-4eec-8634-7aa13ba409d2 0xc003d721a7 0xc003d721a8}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:52 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f4a11f0-ebcd-4eec-8634-7aa13ba409d2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:ni
l,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.366: INFO: Pod "webserver-deployment-84855cf797-77m2m" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-77m2m webserver-deployment-84855cf797- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-84855cf797-77m2m 28ae74e9-9563-4715-9c6a-234a88bbe8de 11518954 0 2020-06-11 11:40:53 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9f4a11f0-ebcd-4eec-8634-7aa13ba409d2 0xc003d72327 0xc003d72328}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 102 52 97 49 49 102 48 45 101 98 99 100 45 52 101 101 99 45 56 54 51 52 45 55 97 97 49 51 98 97 52 48 57 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:ni
l,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.367: INFO: Pod "webserver-deployment-84855cf797-7dqbq" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-7dqbq webserver-deployment-84855cf797- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-84855cf797-7dqbq 3b64b9d8-db10-4749-968f-661d18fecf90 11518925 0 2020-06-11 11:40:52 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9f4a11f0-ebcd-4eec-8634-7aa13ba409d2 0xc003d724a7 0xc003d724a8}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:52 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 102 52 97 49 49 102 48 45 101 98 99 100 45 52 101 101 99 45 56 54 51 52 45 55 97 97 49 51 98 97 52 48 57 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:ni
l,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.367: INFO: Pod "webserver-deployment-84855cf797-8rkv6" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-8rkv6 webserver-deployment-84855cf797- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-84855cf797-8rkv6 af2c2dc5-285f-43be-a15f-0a3d6397ee12 11518836 0 2020-06-11 11:40:34 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9f4a11f0-ebcd-4eec-8634-7aa13ba409d2 0xc003d72637 0xc003d72638}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 102 52 97 49 49 102 48 45 101 98 99 100 45 52 101 101 99 45 56 54 51 52 45 55 97 97 49 51 98 97 52 48 57 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-06-11 11:40:48 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 
125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 50 51 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFile
system:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:48 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.232,StartTime:2020-06-11 11:40:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-11 11:40:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a47ed36833ba586441aac2b36f7e1ee1be41b6e39952da43d460ac8f9f3ff292,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.232,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
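The "is available" / "is not available" verdicts in the log lines above track the dumped `PodStatus`: the available pod is `Phase:Running` with a `Ready` condition of `True`, while the unavailable ones are `Pending` or not yet `Ready`. A minimal sketch of that check, using a hypothetical dict shaped like the dumped API objects (the real e2e framework also accounts for the deployment's `minReadySeconds`, which this sketch ignores):

```python
def is_pod_available(pod):
    """Return True when the pod is Running and its Ready condition is True.

    `pod` is a plain dict mirroring the Pod status fields dumped in the log
    above (hypothetical minimal structure, not the full corev1.Pod).
    """
    status = pod.get("status", {})
    if status.get("phase") != "Running":
        return False
    for cond in status.get("conditions", []):
        if cond.get("type") == "Ready":
            # Kubernetes condition statuses are the strings "True"/"False"/"Unknown".
            return cond.get("status") == "True"
    return False


# Shapes matching the two kinds of dumps above:
ready_pod = {"status": {"phase": "Running",
                        "conditions": [{"type": "Ready", "status": "True"}]}}
creating_pod = {"status": {"phase": "Pending",
                           "conditions": [{"type": "Ready", "status": "False",
                                           "reason": "ContainersNotReady"}]}}
```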
Jun 11 11:40:53.367: INFO: Pod "webserver-deployment-84855cf797-bncnw" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-bncnw webserver-deployment-84855cf797- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-84855cf797-bncnw 45f11c8e-0819-441a-aa69-950b4dc5da92 11518979 0 2020-06-11 11:40:52 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9f4a11f0-ebcd-4eec-8634-7aa13ba409d2 0xc003d72867 0xc003d72868}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:52 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 102 52 97 49 49 102 48 45 101 98 99 100 45 52 101 101 99 45 56 54 51 52 45 55 97 97 49 51 98 97 52 48 57 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-06-11 11:40:53 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 
34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-06-11 11:40:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.367: INFO: Pod "webserver-deployment-84855cf797-cdcmb" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-cdcmb webserver-deployment-84855cf797- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-84855cf797-cdcmb da00b95f-07e0-4496-b066-da26e504e268 11518808 0 2020-06-11 11:40:34 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9f4a11f0-ebcd-4eec-8634-7aa13ba409d2 0xc003d72a17 0xc003d72a18}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f4a11f0-ebcd-4eec-8634-7aa13ba409d2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-06-11 11:40:47 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.230\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFile
system:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:47 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.230,StartTime:2020-06-11 11:40:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-11 11:40:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://67de393015f30422331d367d13188e988a2e0f28cb1b3fae57628c7de5f75981,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.230,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.368: INFO: Pod "webserver-deployment-84855cf797-czth2" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-czth2 webserver-deployment-84855cf797- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-84855cf797-czth2 f260977d-54e0-47a6-8bc5-c1d9bb4f89eb 11518781 0 2020-06-11 11:40:34 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9f4a11f0-ebcd-4eec-8634-7aa13ba409d2 0xc003d72bf7 0xc003d72bf8}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f4a11f0-ebcd-4eec-8634-7aa13ba409d2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-06-11 11:40:43 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.25\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesys
tem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:43 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.25,StartTime:2020-06-11 11:40:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-11 11:40:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://de79e9380decaf8946c5fb6ca9fae7e4579335c7771fac3d88a930e425c00c8b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.25,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.368: INFO: Pod "webserver-deployment-84855cf797-f4kv9" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-f4kv9 webserver-deployment-84855cf797- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-84855cf797-f4kv9 2fc80291-4a71-4e75-b109-a7f60f57183b 11518973 0 2020-06-11 11:40:52 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9f4a11f0-ebcd-4eec-8634-7aa13ba409d2 0xc003d72dd7 0xc003d72dd8}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:52 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f4a11f0-ebcd-4eec-8634-7aa13ba409d2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-06-11 11:40:53 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-06-11 11:40:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.368: INFO: Pod "webserver-deployment-84855cf797-fxkz5" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-fxkz5 webserver-deployment-84855cf797- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-84855cf797-fxkz5 b14f15a9-8e9a-49af-bf7d-68dd409a6969 11518809 0 2020-06-11 11:40:34 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9f4a11f0-ebcd-4eec-8634-7aa13ba409d2 0xc003d72fd7 0xc003d72fd8}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f4a11f0-ebcd-4eec-8634-7aa13ba409d2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-06-11 11:40:47 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.27\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesys
tem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:47 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.27,StartTime:2020-06-11 11:40:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-11 11:40:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0d56057623acbb573335d3ca90e05ae36b5ff0b3bb3a8dba301c0d55378c78b6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.27,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
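The `FieldsV1{Raw:*[123 34 ...]}` runs in the pod dumps above are the managedFields JSON printed as decimal byte values (Go's default formatting for a `[]byte`). They can be turned back into readable JSON with a small helper; the function name here is illustrative and not part of the e2e framework:

```python
import json

def decode_fieldsv1(byte_values):
    """Decode a list of decimal byte values (as logged for FieldsV1 Raw)
    back into the managedFields JSON object."""
    return json.loads(bytes(byte_values).decode("utf-8"))

# The first bytes of each kube-controller-manager entry above decode like so
# (123='{', 34='"', 102='f', 58=':', ...):
raw = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58,
       123, 34, 102, 58, 103, 101, 110, 101, 114, 97, 116, 101, 78, 97,
       109, 101, 34, 58, 123, 125, 125, 125]
print(decode_fieldsv1(raw))  # {'f:metadata': {'f:generateName': {}}}
```

Decoded in full, each entry lists the fields each manager owns (e.g. `"f:labels"`, `"f:ownerReferences"` with the owning ReplicaSet's UID, and the `"f:spec"` container fields set by kube-controller-manager).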
Jun 11 11:40:53.369: INFO: Pod "webserver-deployment-84855cf797-gw6v9" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-gw6v9 webserver-deployment-84855cf797- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-84855cf797-gw6v9 d193057b-c364-4f06-8766-e3540fd664f6 11518955 0 2020-06-11 11:40:53 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9f4a11f0-ebcd-4eec-8634-7aa13ba409d2 0xc003d731c7 0xc003d731c8}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 102 52 97 49 49 102 48 45 101 98 99 100 45 52 101 101 99 45 56 54 51 52 45 55 97 97 49 51 98 97 52 48 57 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:ni
l,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.369: INFO: Pod "webserver-deployment-84855cf797-hc424" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-hc424 webserver-deployment-84855cf797- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-84855cf797-hc424 762fa6f6-c711-4fe7-a484-5a19fddbbbed 11518953 0 2020-06-11 11:40:53 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9f4a11f0-ebcd-4eec-8634-7aa13ba409d2 0xc003d73317 0xc003d73318}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 102 52 97 49 49 102 48 45 101 98 99 100 45 52 101 101 99 45 56 54 51 52 45 55 97 97 49 51 98 97 52 48 57 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:ni
l,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.369: INFO: Pod "webserver-deployment-84855cf797-jnrvm" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-jnrvm webserver-deployment-84855cf797- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-84855cf797-jnrvm 61365449-8271-4a54-a13b-6b81b19b1480 11518926 0 2020-06-11 11:40:52 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9f4a11f0-ebcd-4eec-8634-7aa13ba409d2 0xc003d73457 0xc003d73458}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:52 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 102 52 97 49 49 102 48 45 101 98 99 100 45 52 101 101 99 45 56 54 51 52 45 55 97 97 49 51 98 97 52 48 57 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:ni
l,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.369: INFO: Pod "webserver-deployment-84855cf797-l9nd7" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-l9nd7 webserver-deployment-84855cf797- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-84855cf797-l9nd7 99cfc569-4ea6-49cb-9728-f8e1cbcc021c 11518956 0 2020-06-11 11:40:53 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9f4a11f0-ebcd-4eec-8634-7aa13ba409d2 0xc003d735b7 0xc003d735b8}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 102 52 97 49 49 102 48 45 101 98 99 100 45 52 101 101 99 45 56 54 51 52 45 55 97 97 49 51 98 97 52 48 57 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:ni
l,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.370: INFO: Pod "webserver-deployment-84855cf797-lzzpf" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-lzzpf webserver-deployment-84855cf797- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-84855cf797-lzzpf 5eae3438-688e-4dd7-a66c-d8b5ac972203 11518791 0 2020-06-11 11:40:34 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9f4a11f0-ebcd-4eec-8634-7aa13ba409d2 0xc003d73707 0xc003d73708}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 102 52 97 49 49 102 48 45 101 98 99 100 45 52 101 101 99 45 56 54 51 52 45 55 97 97 49 51 98 97 52 48 57 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-06-11 11:40:45 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 
125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 50 50 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFile
system:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:44 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.229,StartTime:2020-06-11 11:40:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-11 11:40:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9e5637b653c0af2715a02d70bf2d0af60076a2cc85aa13b39bbdd5b0aec3f163,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.229,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.370: INFO: Pod "webserver-deployment-84855cf797-nhc22" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-nhc22 webserver-deployment-84855cf797- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-84855cf797-nhc22 407df09e-3d97-45f0-8269-3f6ed268f9db 11518770 0 2020-06-11 11:40:33 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9f4a11f0-ebcd-4eec-8634-7aa13ba409d2 0xc003d73907 0xc003d73908}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f4a11f0-ebcd-4eec-8634-7aa13ba409d2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-06-11 11:40:42 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.228\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFile
system:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:42 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.228,StartTime:2020-06-11 11:40:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-11 11:40:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4867a831b3b6ee409fd361b7a8e83067febc1a5d4e1d152ebf9c017c37432ff4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.228,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.370: INFO: Pod "webserver-deployment-84855cf797-twrfm" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-twrfm webserver-deployment-84855cf797- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-84855cf797-twrfm 9767d5e1-7001-4199-a9d0-dc8b2605ffcb 11518833 0 2020-06-11 11:40:34 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9f4a11f0-ebcd-4eec-8634-7aa13ba409d2 0xc003d73ab7 0xc003d73ab8}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f4a11f0-ebcd-4eec-8634-7aa13ba409d2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-06-11 11:40:48 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.231\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFile
system:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:48 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.231,StartTime:2020-06-11 11:40:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-11 11:40:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ba1a8bd31b7e7cfbb1aee91bacb67c4e843c08933f9f9c350b51095005fbbaee,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.231,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.370: INFO: Pod "webserver-deployment-84855cf797-xgndh" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-xgndh webserver-deployment-84855cf797- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-84855cf797-xgndh 139dc416-79f3-4057-baf3-73e23721a6d9 11518929 0 2020-06-11 11:40:52 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9f4a11f0-ebcd-4eec-8634-7aa13ba409d2 0xc003d73ce7 0xc003d73ce8}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:52 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f4a11f0-ebcd-4eec-8634-7aa13ba409d2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:ni
l,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 11 11:40:53.371: INFO: Pod "webserver-deployment-84855cf797-xwqq7" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-xwqq7 webserver-deployment-84855cf797- deployment-8882 /api/v1/namespaces/deployment-8882/pods/webserver-deployment-84855cf797-xwqq7 14128765-d3f9-4273-80cf-57d1f3e2e60c 11518823 0 2020-06-11 11:40:34 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9f4a11f0-ebcd-4eec-8634-7aa13ba409d2 0xc003d73e17 0xc003d73e18}] []  [{kube-controller-manager Update v1 2020-06-11 11:40:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f4a11f0-ebcd-4eec-8634-7aa13ba409d2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-06-11 11:40:48 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.28\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5dhb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5dhb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5dhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesys
tem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:48 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:40:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.28,StartTime:2020-06-11 11:40:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-11 11:40:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://04a78bfbf5aca4f6a1fe5979c4ca024b88d7a98269476ec4fd161119ed544418,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.28,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:40:53.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8882" for this suite.

• [SLOW TEST:19.582 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":139,"skipped":2253,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
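The proportional-scaling behaviour the Deployment test above verifies (replicas of a resized Deployment are spread across its ReplicaSets in proportion to their current sizes) can be sketched as follows. This is an illustrative simplification, not the deployment controller's actual algorithm; function and variable names are invented for the sketch.

```go
package main

import "fmt"

// proportionalScale distributes newTotal replicas across ReplicaSets in
// proportion to their current sizes, handing rounding leftovers out one
// at a time. Simplified sketch of the behaviour the conformance test
// checks, not the controller's real code.
func proportionalScale(current []int, newTotal int) []int {
	oldTotal := 0
	for _, c := range current {
		oldTotal += c
	}
	out := make([]int, len(current))
	if oldTotal == 0 {
		return out
	}
	assigned := 0
	for i, c := range current {
		out[i] = c * newTotal / oldTotal // integer share, rounded down
		assigned += out[i]
	}
	// distribute rounding leftovers so the totals match exactly
	for i := 0; assigned < newTotal; i = (i + 1) % len(out) {
		out[i]++
		assigned++
	}
	return out
}

func main() {
	// two equal ReplicaSets scaled from 10 to 30 total replicas
	fmt.Println(proportionalScale([]int{5, 5}, 30)) // [15 15]
}
```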
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:40:53.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 11:40:53.807: INFO: The status of Pod test-webserver-c7174224-4f55-49cf-8db0-e1baa80bc2a4 is Pending, waiting for it to be Running (with Ready = true)
Jun 11 11:40:56.024: INFO: The status of Pod test-webserver-c7174224-4f55-49cf-8db0-e1baa80bc2a4 is Pending, waiting for it to be Running (with Ready = true)
Jun 11 11:40:58.000: INFO: The status of Pod test-webserver-c7174224-4f55-49cf-8db0-e1baa80bc2a4 is Pending, waiting for it to be Running (with Ready = true)
Jun 11 11:41:00.696: INFO: The status of Pod test-webserver-c7174224-4f55-49cf-8db0-e1baa80bc2a4 is Pending, waiting for it to be Running (with Ready = true)
Jun 11 11:41:01.930: INFO: The status of Pod test-webserver-c7174224-4f55-49cf-8db0-e1baa80bc2a4 is Pending, waiting for it to be Running (with Ready = true)
Jun 11 11:41:03.983: INFO: The status of Pod test-webserver-c7174224-4f55-49cf-8db0-e1baa80bc2a4 is Pending, waiting for it to be Running (with Ready = true)
Jun 11 11:41:06.140: INFO: The status of Pod test-webserver-c7174224-4f55-49cf-8db0-e1baa80bc2a4 is Pending, waiting for it to be Running (with Ready = true)
Jun 11 11:41:08.252: INFO: The status of Pod test-webserver-c7174224-4f55-49cf-8db0-e1baa80bc2a4 is Pending, waiting for it to be Running (with Ready = true)
Jun 11 11:41:09.910: INFO: The status of Pod test-webserver-c7174224-4f55-49cf-8db0-e1baa80bc2a4 is Pending, waiting for it to be Running (with Ready = true)
Jun 11 11:41:11.814: INFO: The status of Pod test-webserver-c7174224-4f55-49cf-8db0-e1baa80bc2a4 is Pending, waiting for it to be Running (with Ready = true)
Jun 11 11:41:13.827: INFO: The status of Pod test-webserver-c7174224-4f55-49cf-8db0-e1baa80bc2a4 is Running (Ready = false)
Jun 11 11:41:15.833: INFO: The status of Pod test-webserver-c7174224-4f55-49cf-8db0-e1baa80bc2a4 is Running (Ready = false)
Jun 11 11:41:17.811: INFO: The status of Pod test-webserver-c7174224-4f55-49cf-8db0-e1baa80bc2a4 is Running (Ready = false)
Jun 11 11:41:19.813: INFO: The status of Pod test-webserver-c7174224-4f55-49cf-8db0-e1baa80bc2a4 is Running (Ready = false)
Jun 11 11:41:21.811: INFO: The status of Pod test-webserver-c7174224-4f55-49cf-8db0-e1baa80bc2a4 is Running (Ready = false)
Jun 11 11:41:23.811: INFO: The status of Pod test-webserver-c7174224-4f55-49cf-8db0-e1baa80bc2a4 is Running (Ready = false)
Jun 11 11:41:25.810: INFO: The status of Pod test-webserver-c7174224-4f55-49cf-8db0-e1baa80bc2a4 is Running (Ready = false)
Jun 11 11:41:27.811: INFO: The status of Pod test-webserver-c7174224-4f55-49cf-8db0-e1baa80bc2a4 is Running (Ready = false)
Jun 11 11:41:29.812: INFO: The status of Pod test-webserver-c7174224-4f55-49cf-8db0-e1baa80bc2a4 is Running (Ready = false)
Jun 11 11:41:31.812: INFO: The status of Pod test-webserver-c7174224-4f55-49cf-8db0-e1baa80bc2a4 is Running (Ready = true)
Jun 11 11:41:31.815: INFO: Container started at 2020-06-11 11:41:11 +0000 UTC, pod became ready at 2020-06-11 11:41:30 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:41:31.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5909" for this suite.

• [SLOW TEST:38.387 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":140,"skipped":2294,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:41:31.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-2b288cb8-ea32-4d7f-957f-59cb5aacc69b
STEP: Creating a pod to test consume secrets
Jun 11 11:41:31.918: INFO: Waiting up to 5m0s for pod "pod-secrets-40f183dd-decb-4b7d-81dd-7a3957f7e09f" in namespace "secrets-5218" to be "Succeeded or Failed"
Jun 11 11:41:31.934: INFO: Pod "pod-secrets-40f183dd-decb-4b7d-81dd-7a3957f7e09f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.122378ms
Jun 11 11:41:33.938: INFO: Pod "pod-secrets-40f183dd-decb-4b7d-81dd-7a3957f7e09f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019298662s
Jun 11 11:41:35.942: INFO: Pod "pod-secrets-40f183dd-decb-4b7d-81dd-7a3957f7e09f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023687949s
STEP: Saw pod success
Jun 11 11:41:35.942: INFO: Pod "pod-secrets-40f183dd-decb-4b7d-81dd-7a3957f7e09f" satisfied condition "Succeeded or Failed"
Jun 11 11:41:35.945: INFO: Trying to get logs from node kali-worker pod pod-secrets-40f183dd-decb-4b7d-81dd-7a3957f7e09f container secret-env-test: 
STEP: delete the pod
Jun 11 11:41:36.020: INFO: Waiting for pod pod-secrets-40f183dd-decb-4b7d-81dd-7a3957f7e09f to disappear
Jun 11 11:41:36.030: INFO: Pod pod-secrets-40f183dd-decb-4b7d-81dd-7a3957f7e09f no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:41:36.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5218" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":141,"skipped":2326,"failed":0}
S
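The Secrets test above creates a secret and a pod whose container consumes it through env vars (via `secretKeyRef`). The mapping it exercises can be sketched as a plain lookup; the map keys and names here are illustrative, not the test's actual values.

```go
package main

import "fmt"

// resolveSecretEnv sketches how secret data becomes container environment
// variables: refs maps an env var name to a key in the secret's data, and
// only keys present in the secret are exported.
func resolveSecretEnv(secretData map[string][]byte, refs map[string]string) map[string]string {
	env := make(map[string]string)
	for name, key := range refs {
		if v, ok := secretData[key]; ok {
			env[name] = string(v)
		}
	}
	return env
}

func main() {
	secret := map[string][]byte{"data-1": []byte("value-1")}
	env := resolveSecretEnv(secret, map[string]string{"SECRET_DATA": "data-1"})
	fmt.Println(env["SECRET_DATA"]) // value-1
}
```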
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:41:36.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:41:47.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2744" for this suite.

• [SLOW TEST:11.818 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":142,"skipped":2327,"failed":0}
SSSSS
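The ResourceQuota test's create/observe/delete/release cycle can be sketched with a simple counter against a hard limit. This is a toy model of quota accounting, not the quota controller or admission plugin.

```go
package main

import "fmt"

// quota mirrors the lifecycle the test walks through: usage rises when an
// object (here, a ReplicaSet) is created under the quota and falls back
// when it is deleted.
type quota struct {
	hard, used int
}

// create admits n objects if they fit under the hard limit.
func (q *quota) create(n int) bool {
	if q.used+n > q.hard {
		return false // admission would reject the object
	}
	q.used += n
	return true
}

// release returns usage when objects are deleted.
func (q *quota) release(n int) {
	q.used -= n
}

func main() {
	q := &quota{hard: 1}
	fmt.Println(q.create(1), q.used) // true 1 - creation captured in quota status
	q.release(1)
	fmt.Println(q.used) // 0 - usage released after deletion
}
```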
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:41:47.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jun 11 11:41:49.690: INFO: Pod name wrapped-volume-race-e3b88cb4-451a-4309-81a5-48add809e759: Found 0 pods out of 5
Jun 11 11:41:54.701: INFO: Pod name wrapped-volume-race-e3b88cb4-451a-4309-81a5-48add809e759: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e3b88cb4-451a-4309-81a5-48add809e759 in namespace emptydir-wrapper-8639, will wait for the garbage collector to delete the pods
Jun 11 11:42:08.790: INFO: Deleting ReplicationController wrapped-volume-race-e3b88cb4-451a-4309-81a5-48add809e759 took: 12.063768ms
Jun 11 11:42:09.090: INFO: Terminating ReplicationController wrapped-volume-race-e3b88cb4-451a-4309-81a5-48add809e759 pods took: 300.247169ms
STEP: Creating RC which spawns configmap-volume pods
Jun 11 11:42:24.541: INFO: Pod name wrapped-volume-race-9f10472f-7a15-4580-bb4d-c9f3c86d8b9c: Found 0 pods out of 5
Jun 11 11:42:30.903: INFO: Pod name wrapped-volume-race-9f10472f-7a15-4580-bb4d-c9f3c86d8b9c: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-9f10472f-7a15-4580-bb4d-c9f3c86d8b9c in namespace emptydir-wrapper-8639, will wait for the garbage collector to delete the pods
Jun 11 11:42:45.016: INFO: Deleting ReplicationController wrapped-volume-race-9f10472f-7a15-4580-bb4d-c9f3c86d8b9c took: 24.740981ms
Jun 11 11:42:45.716: INFO: Terminating ReplicationController wrapped-volume-race-9f10472f-7a15-4580-bb4d-c9f3c86d8b9c pods took: 700.29533ms
STEP: Creating RC which spawns configmap-volume pods
Jun 11 11:43:04.083: INFO: Pod name wrapped-volume-race-eb80abb1-926c-4fd9-9966-f5699925eee8: Found 0 pods out of 5
Jun 11 11:43:09.422: INFO: Pod name wrapped-volume-race-eb80abb1-926c-4fd9-9966-f5699925eee8: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-eb80abb1-926c-4fd9-9966-f5699925eee8 in namespace emptydir-wrapper-8639, will wait for the garbage collector to delete the pods
Jun 11 11:43:23.554: INFO: Deleting ReplicationController wrapped-volume-race-eb80abb1-926c-4fd9-9966-f5699925eee8 took: 7.577593ms
Jun 11 11:43:23.954: INFO: Terminating ReplicationController wrapped-volume-race-eb80abb1-926c-4fd9-9966-f5699925eee8 pods took: 400.283848ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:43:34.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8639" for this suite.

• [SLOW TEST:106.480 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":143,"skipped":2332,"failed":0}
SSSSSSSSS
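The EmptyDir wrapper test above repeatedly spawns an RC of pods that each mount the same 50 ConfigMaps, checking that concurrent mounts do not race. The property can be sketched as a concurrency exercise: each pod gets its own emptyDir-backed wrapper, so there is no shared mount state. This is a sketch of the invariant, not the kubelet's volume manager.

```go
package main

import (
	"fmt"
	"sync"
)

// mountAll simulates 'pods' pods concurrently mounting the same set of
// ConfigMaps, each into its own per-pod wrapper (nothing shared), and
// returns how many volumes each pod ended up with.
func mountAll(pods, configMaps int) []int {
	var wg sync.WaitGroup
	results := make([]int, pods)
	for p := 0; p < pods; p++ {
		wg.Add(1)
		go func(p int) {
			defer wg.Done()
			mounted := make(map[int]bool) // per-pod wrapper: no shared state to race on
			for c := 0; c < configMaps; c++ {
				mounted[c] = true
			}
			results[p] = len(mounted)
		}(p)
	}
	wg.Wait()
	return results
}

func main() {
	fmt.Println(mountAll(5, 50)) // every pod sees all 50 volumes
}
```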
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:43:34.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun 11 11:43:34.406: INFO: Waiting up to 5m0s for pod "pod-b5b3e6b9-cd73-4242-abab-5c0d5c029f6d" in namespace "emptydir-939" to be "Succeeded or Failed"
Jun 11 11:43:34.445: INFO: Pod "pod-b5b3e6b9-cd73-4242-abab-5c0d5c029f6d": Phase="Pending", Reason="", readiness=false. Elapsed: 39.003033ms
Jun 11 11:43:36.449: INFO: Pod "pod-b5b3e6b9-cd73-4242-abab-5c0d5c029f6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043328295s
Jun 11 11:43:38.453: INFO: Pod "pod-b5b3e6b9-cd73-4242-abab-5c0d5c029f6d": Phase="Running", Reason="", readiness=true. Elapsed: 4.047389563s
Jun 11 11:43:40.468: INFO: Pod "pod-b5b3e6b9-cd73-4242-abab-5c0d5c029f6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06269091s
STEP: Saw pod success
Jun 11 11:43:40.469: INFO: Pod "pod-b5b3e6b9-cd73-4242-abab-5c0d5c029f6d" satisfied condition "Succeeded or Failed"
Jun 11 11:43:40.471: INFO: Trying to get logs from node kali-worker pod pod-b5b3e6b9-cd73-4242-abab-5c0d5c029f6d container test-container: 
STEP: delete the pod
Jun 11 11:43:40.557: INFO: Waiting for pod pod-b5b3e6b9-cd73-4242-abab-5c0d5c029f6d to disappear
Jun 11 11:43:40.563: INFO: Pod pod-b5b3e6b9-cd73-4242-abab-5c0d5c029f6d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:43:40.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-939" for this suite.

• [SLOW TEST:6.266 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":144,"skipped":2341,"failed":0}
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:43:40.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Jun 11 11:43:40.691: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:43:48.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2063" for this suite.

• [SLOW TEST:8.186 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":145,"skipped":2343,"failed":0}
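The InitContainer test above exercises a simple rule: with `restartPolicy: Never`, a failed init container fails the whole pod and the app containers never start. That rule can be sketched as below; it is a model of the documented kubelet behaviour, not its implementation.

```go
package main

import "fmt"

// podOutcome encodes the rule the test verifies: a failed init container
// under RestartPolicy=Never fails the pod outright; under a restarting
// policy it would instead be retried.
func podOutcome(initFailed, restartNever bool) (phase string, appStarted bool) {
	if initFailed {
		if restartNever {
			return "Failed", false
		}
		return "Pending", false // init container is retried instead
	}
	return "Running", true
}

func main() {
	phase, started := podOutcome(true, true)
	fmt.Println(phase, started) // Failed false
}
```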
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:43:48.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8146.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8146.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8146.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8146.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8146.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8146.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 11 11:43:54.930: INFO: DNS probes using dns-8146/dns-test-efb47457-c0b4-4a46-aa49-3911ec380301 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:43:54.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8146" for this suite.

• [SLOW TEST:6.216 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":146,"skipped":2343,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
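The probe scripts in the DNS test above derive a pod's in-cluster A record from its IP with awk (`10.244.2.28` becomes `10-244-2-28.dns-8146.pod.cluster.local`). The same derivation in Go:

```go
package main

import (
	"fmt"
	"strings"
)

// podARecord builds a pod's cluster-DNS A record name: dots in the IP
// become dashes, then the namespace and the pod.cluster.local suffix are
// appended - mirroring the awk pipeline in the probe commands.
func podARecord(ip, namespace string) string {
	return strings.ReplaceAll(ip, ".", "-") + "." + namespace + ".pod.cluster.local"
}

func main() {
	fmt.Println(podARecord("10.244.2.28", "dns-8146")) // 10-244-2-28.dns-8146.pod.cluster.local
}
```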
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:43:55.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jun 11 11:44:11.320: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-455 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 11 11:44:11.320: INFO: >>> kubeConfig: /root/.kube/config
I0611 11:44:11.354966       7 log.go:172] (0xc00251a210) (0xc00174b2c0) Create stream
I0611 11:44:11.354999       7 log.go:172] (0xc00251a210) (0xc00174b2c0) Stream added, broadcasting: 1
I0611 11:44:11.356550       7 log.go:172] (0xc00251a210) Reply frame received for 1
I0611 11:44:11.356574       7 log.go:172] (0xc00251a210) (0xc002485860) Create stream
I0611 11:44:11.356581       7 log.go:172] (0xc00251a210) (0xc002485860) Stream added, broadcasting: 3
I0611 11:44:11.357497       7 log.go:172] (0xc00251a210) Reply frame received for 3
I0611 11:44:11.357545       7 log.go:172] (0xc00251a210) (0xc002acebe0) Create stream
I0611 11:44:11.357555       7 log.go:172] (0xc00251a210) (0xc002acebe0) Stream added, broadcasting: 5
I0611 11:44:11.358306       7 log.go:172] (0xc00251a210) Reply frame received for 5
I0611 11:44:11.416355       7 log.go:172] (0xc00251a210) Data frame received for 5
I0611 11:44:11.416385       7 log.go:172] (0xc002acebe0) (5) Data frame handling
I0611 11:44:11.416404       7 log.go:172] (0xc00251a210) Data frame received for 3
I0611 11:44:11.416411       7 log.go:172] (0xc002485860) (3) Data frame handling
I0611 11:44:11.416419       7 log.go:172] (0xc002485860) (3) Data frame sent
I0611 11:44:11.416432       7 log.go:172] (0xc00251a210) Data frame received for 3
I0611 11:44:11.416437       7 log.go:172] (0xc002485860) (3) Data frame handling
I0611 11:44:11.417890       7 log.go:172] (0xc00251a210) Data frame received for 1
I0611 11:44:11.417905       7 log.go:172] (0xc00174b2c0) (1) Data frame handling
I0611 11:44:11.417917       7 log.go:172] (0xc00174b2c0) (1) Data frame sent
I0611 11:44:11.418189       7 log.go:172] (0xc00251a210) (0xc00174b2c0) Stream removed, broadcasting: 1
I0611 11:44:11.418234       7 log.go:172] (0xc00251a210) Go away received
I0611 11:44:11.418319       7 log.go:172] (0xc00251a210) (0xc00174b2c0) Stream removed, broadcasting: 1
I0611 11:44:11.418339       7 log.go:172] (0xc00251a210) (0xc002485860) Stream removed, broadcasting: 3
I0611 11:44:11.418351       7 log.go:172] (0xc00251a210) (0xc002acebe0) Stream removed, broadcasting: 5
Jun 11 11:44:11.418: INFO: Exec stderr: ""
Jun 11 11:44:11.418: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-455 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 11 11:44:11.418: INFO: >>> kubeConfig: /root/.kube/config
I0611 11:44:11.449048       7 log.go:172] (0xc001f00420) (0xc002485d60) Create stream
I0611 11:44:11.449073       7 log.go:172] (0xc001f00420) (0xc002485d60) Stream added, broadcasting: 1
I0611 11:44:11.451067       7 log.go:172] (0xc001f00420) Reply frame received for 1
I0611 11:44:11.451134       7 log.go:172] (0xc001f00420) (0xc002426000) Create stream
I0611 11:44:11.451154       7 log.go:172] (0xc001f00420) (0xc002426000) Stream added, broadcasting: 3
I0611 11:44:11.452198       7 log.go:172] (0xc001f00420) Reply frame received for 3
I0611 11:44:11.452253       7 log.go:172] (0xc001f00420) (0xc002acec80) Create stream
I0611 11:44:11.452268       7 log.go:172] (0xc001f00420) (0xc002acec80) Stream added, broadcasting: 5
I0611 11:44:11.453270       7 log.go:172] (0xc001f00420) Reply frame received for 5
I0611 11:44:11.505483       7 log.go:172] (0xc001f00420) Data frame received for 5
I0611 11:44:11.505518       7 log.go:172] (0xc002acec80) (5) Data frame handling
I0611 11:44:11.505543       7 log.go:172] (0xc001f00420) Data frame received for 3
I0611 11:44:11.505564       7 log.go:172] (0xc002426000) (3) Data frame handling
I0611 11:44:11.505579       7 log.go:172] (0xc002426000) (3) Data frame sent
I0611 11:44:11.505589       7 log.go:172] (0xc001f00420) Data frame received for 3
I0611 11:44:11.505598       7 log.go:172] (0xc002426000) (3) Data frame handling
I0611 11:44:11.506618       7 log.go:172] (0xc001f00420) Data frame received for 1
I0611 11:44:11.506638       7 log.go:172] (0xc002485d60) (1) Data frame handling
I0611 11:44:11.506650       7 log.go:172] (0xc002485d60) (1) Data frame sent
I0611 11:44:11.506674       7 log.go:172] (0xc001f00420) (0xc002485d60) Stream removed, broadcasting: 1
I0611 11:44:11.506710       7 log.go:172] (0xc001f00420) Go away received
I0611 11:44:11.506754       7 log.go:172] (0xc001f00420) (0xc002485d60) Stream removed, broadcasting: 1
I0611 11:44:11.506787       7 log.go:172] (0xc001f00420) (0xc002426000) Stream removed, broadcasting: 3
I0611 11:44:11.506815       7 log.go:172] (0xc001f00420) (0xc002acec80) Stream removed, broadcasting: 5
Jun 11 11:44:11.506: INFO: Exec stderr: ""
Jun 11 11:44:11.506: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-455 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 11 11:44:11.506: INFO: >>> kubeConfig: /root/.kube/config
I0611 11:44:11.539277       7 log.go:172] (0xc00251a8f0) (0xc00174b5e0) Create stream
I0611 11:44:11.539317       7 log.go:172] (0xc00251a8f0) (0xc00174b5e0) Stream added, broadcasting: 1
I0611 11:44:11.541661       7 log.go:172] (0xc00251a8f0) Reply frame received for 1
I0611 11:44:11.541713       7 log.go:172] (0xc00251a8f0) (0xc0024260a0) Create stream
I0611 11:44:11.541731       7 log.go:172] (0xc00251a8f0) (0xc0024260a0) Stream added, broadcasting: 3
I0611 11:44:11.542796       7 log.go:172] (0xc00251a8f0) Reply frame received for 3
I0611 11:44:11.542830       7 log.go:172] (0xc00251a8f0) (0xc002aced20) Create stream
I0611 11:44:11.542846       7 log.go:172] (0xc00251a8f0) (0xc002aced20) Stream added, broadcasting: 5
I0611 11:44:11.543856       7 log.go:172] (0xc00251a8f0) Reply frame received for 5
I0611 11:44:11.606982       7 log.go:172] (0xc00251a8f0) Data frame received for 3
I0611 11:44:11.607008       7 log.go:172] (0xc0024260a0) (3) Data frame handling
I0611 11:44:11.607015       7 log.go:172] (0xc0024260a0) (3) Data frame sent
I0611 11:44:11.607020       7 log.go:172] (0xc00251a8f0) Data frame received for 3
I0611 11:44:11.607024       7 log.go:172] (0xc0024260a0) (3) Data frame handling
I0611 11:44:11.607046       7 log.go:172] (0xc00251a8f0) Data frame received for 5
I0611 11:44:11.607075       7 log.go:172] (0xc002aced20) (5) Data frame handling
I0611 11:44:11.608307       7 log.go:172] (0xc00251a8f0) Data frame received for 1
I0611 11:44:11.608333       7 log.go:172] (0xc00174b5e0) (1) Data frame handling
I0611 11:44:11.608345       7 log.go:172] (0xc00174b5e0) (1) Data frame sent
I0611 11:44:11.608361       7 log.go:172] (0xc00251a8f0) (0xc00174b5e0) Stream removed, broadcasting: 1
I0611 11:44:11.608390       7 log.go:172] (0xc00251a8f0) Go away received
I0611 11:44:11.608501       7 log.go:172] (0xc00251a8f0) (0xc00174b5e0) Stream removed, broadcasting: 1
I0611 11:44:11.608530       7 log.go:172] (0xc00251a8f0) (0xc0024260a0) Stream removed, broadcasting: 3
I0611 11:44:11.608544       7 log.go:172] (0xc00251a8f0) (0xc002aced20) Stream removed, broadcasting: 5
Jun 11 11:44:11.608: INFO: Exec stderr: ""
Jun 11 11:44:11.608: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-455 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 11 11:44:11.608: INFO: >>> kubeConfig: /root/.kube/config
I0611 11:44:11.646006       7 log.go:172] (0xc001f00a50) (0xc0022b6320) Create stream
I0611 11:44:11.646036       7 log.go:172] (0xc001f00a50) (0xc0022b6320) Stream added, broadcasting: 1
I0611 11:44:11.647824       7 log.go:172] (0xc001f00a50) Reply frame received for 1
I0611 11:44:11.647850       7 log.go:172] (0xc001f00a50) (0xc002acee60) Create stream
I0611 11:44:11.647859       7 log.go:172] (0xc001f00a50) (0xc002acee60) Stream added, broadcasting: 3
I0611 11:44:11.648723       7 log.go:172] (0xc001f00a50) Reply frame received for 3
I0611 11:44:11.648751       7 log.go:172] (0xc001f00a50) (0xc00174b680) Create stream
I0611 11:44:11.648762       7 log.go:172] (0xc001f00a50) (0xc00174b680) Stream added, broadcasting: 5
I0611 11:44:11.649685       7 log.go:172] (0xc001f00a50) Reply frame received for 5
I0611 11:44:11.709503       7 log.go:172] (0xc001f00a50) Data frame received for 3
I0611 11:44:11.709555       7 log.go:172] (0xc002acee60) (3) Data frame handling
I0611 11:44:11.709623       7 log.go:172] (0xc002acee60) (3) Data frame sent
I0611 11:44:11.709650       7 log.go:172] (0xc001f00a50) Data frame received for 3
I0611 11:44:11.709665       7 log.go:172] (0xc002acee60) (3) Data frame handling
I0611 11:44:11.709679       7 log.go:172] (0xc001f00a50) Data frame received for 5
I0611 11:44:11.709691       7 log.go:172] (0xc00174b680) (5) Data frame handling
I0611 11:44:11.710879       7 log.go:172] (0xc001f00a50) Data frame received for 1
I0611 11:44:11.710911       7 log.go:172] (0xc0022b6320) (1) Data frame handling
I0611 11:44:11.710932       7 log.go:172] (0xc0022b6320) (1) Data frame sent
I0611 11:44:11.710949       7 log.go:172] (0xc001f00a50) (0xc0022b6320) Stream removed, broadcasting: 1
I0611 11:44:11.710968       7 log.go:172] (0xc001f00a50) Go away received
I0611 11:44:11.711111       7 log.go:172] (0xc001f00a50) (0xc0022b6320) Stream removed, broadcasting: 1
I0611 11:44:11.711138       7 log.go:172] (0xc001f00a50) (0xc002acee60) Stream removed, broadcasting: 3
I0611 11:44:11.711157       7 log.go:172] (0xc001f00a50) (0xc00174b680) Stream removed, broadcasting: 5
Jun 11 11:44:11.711: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jun 11 11:44:11.711: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-455 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 11 11:44:11.711: INFO: >>> kubeConfig: /root/.kube/config
I0611 11:44:11.737771       7 log.go:172] (0xc002a93760) (0xc0017b2320) Create stream
I0611 11:44:11.737815       7 log.go:172] (0xc002a93760) (0xc0017b2320) Stream added, broadcasting: 1
I0611 11:44:11.739440       7 log.go:172] (0xc002a93760) Reply frame received for 1
I0611 11:44:11.739482       7 log.go:172] (0xc002a93760) (0xc0022b6460) Create stream
I0611 11:44:11.739495       7 log.go:172] (0xc002a93760) (0xc0022b6460) Stream added, broadcasting: 3
I0611 11:44:11.740257       7 log.go:172] (0xc002a93760) Reply frame received for 3
I0611 11:44:11.740286       7 log.go:172] (0xc002a93760) (0xc0022b6500) Create stream
I0611 11:44:11.740296       7 log.go:172] (0xc002a93760) (0xc0022b6500) Stream added, broadcasting: 5
I0611 11:44:11.741042       7 log.go:172] (0xc002a93760) Reply frame received for 5
I0611 11:44:11.819545       7 log.go:172] (0xc002a93760) Data frame received for 5
I0611 11:44:11.819581       7 log.go:172] (0xc0022b6500) (5) Data frame handling
I0611 11:44:11.819606       7 log.go:172] (0xc002a93760) Data frame received for 3
I0611 11:44:11.819614       7 log.go:172] (0xc0022b6460) (3) Data frame handling
I0611 11:44:11.819630       7 log.go:172] (0xc0022b6460) (3) Data frame sent
I0611 11:44:11.819637       7 log.go:172] (0xc002a93760) Data frame received for 3
I0611 11:44:11.819644       7 log.go:172] (0xc0022b6460) (3) Data frame handling
I0611 11:44:11.820997       7 log.go:172] (0xc002a93760) Data frame received for 1
I0611 11:44:11.821023       7 log.go:172] (0xc0017b2320) (1) Data frame handling
I0611 11:44:11.821033       7 log.go:172] (0xc0017b2320) (1) Data frame sent
I0611 11:44:11.821044       7 log.go:172] (0xc002a93760) (0xc0017b2320) Stream removed, broadcasting: 1
I0611 11:44:11.821087       7 log.go:172] (0xc002a93760) Go away received
I0611 11:44:11.821379       7 log.go:172] (0xc002a93760) (0xc0017b2320) Stream removed, broadcasting: 1
I0611 11:44:11.821418       7 log.go:172] (0xc002a93760) (0xc0022b6460) Stream removed, broadcasting: 3
I0611 11:44:11.821443       7 log.go:172] (0xc002a93760) (0xc0022b6500) Stream removed, broadcasting: 5
Jun 11 11:44:11.821: INFO: Exec stderr: ""
Jun 11 11:44:11.821: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-455 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 11 11:44:11.821: INFO: >>> kubeConfig: /root/.kube/config
I0611 11:44:11.853973       7 log.go:172] (0xc00251af20) (0xc00174b860) Create stream
I0611 11:44:11.854001       7 log.go:172] (0xc00251af20) (0xc00174b860) Stream added, broadcasting: 1
I0611 11:44:11.855762       7 log.go:172] (0xc00251af20) Reply frame received for 1
I0611 11:44:11.855799       7 log.go:172] (0xc00251af20) (0xc00174bcc0) Create stream
I0611 11:44:11.855813       7 log.go:172] (0xc00251af20) (0xc00174bcc0) Stream added, broadcasting: 3
I0611 11:44:11.856747       7 log.go:172] (0xc00251af20) Reply frame received for 3
I0611 11:44:11.856780       7 log.go:172] (0xc00251af20) (0xc0017b2460) Create stream
I0611 11:44:11.856793       7 log.go:172] (0xc00251af20) (0xc0017b2460) Stream added, broadcasting: 5
I0611 11:44:11.858050       7 log.go:172] (0xc00251af20) Reply frame received for 5
I0611 11:44:11.922808       7 log.go:172] (0xc00251af20) Data frame received for 3
I0611 11:44:11.922840       7 log.go:172] (0xc00174bcc0) (3) Data frame handling
I0611 11:44:11.922862       7 log.go:172] (0xc00174bcc0) (3) Data frame sent
I0611 11:44:11.922880       7 log.go:172] (0xc00251af20) Data frame received for 3
I0611 11:44:11.922895       7 log.go:172] (0xc00174bcc0) (3) Data frame handling
I0611 11:44:11.923109       7 log.go:172] (0xc00251af20) Data frame received for 5
I0611 11:44:11.923165       7 log.go:172] (0xc0017b2460) (5) Data frame handling
I0611 11:44:11.924853       7 log.go:172] (0xc00251af20) Data frame received for 1
I0611 11:44:11.924871       7 log.go:172] (0xc00174b860) (1) Data frame handling
I0611 11:44:11.924887       7 log.go:172] (0xc00174b860) (1) Data frame sent
I0611 11:44:11.924901       7 log.go:172] (0xc00251af20) (0xc00174b860) Stream removed, broadcasting: 1
I0611 11:44:11.924971       7 log.go:172] (0xc00251af20) (0xc00174b860) Stream removed, broadcasting: 1
I0611 11:44:11.924982       7 log.go:172] (0xc00251af20) (0xc00174bcc0) Stream removed, broadcasting: 3
I0611 11:44:11.925024       7 log.go:172] (0xc00251af20) Go away received
I0611 11:44:11.925324       7 log.go:172] (0xc00251af20) (0xc0017b2460) Stream removed, broadcasting: 5
Jun 11 11:44:11.925: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jun 11 11:44:11.925: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-455 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 11 11:44:11.925: INFO: >>> kubeConfig: /root/.kube/config
I0611 11:44:11.957994       7 log.go:172] (0xc00251b550) (0xc002a64000) Create stream
I0611 11:44:11.958026       7 log.go:172] (0xc00251b550) (0xc002a64000) Stream added, broadcasting: 1
I0611 11:44:11.959600       7 log.go:172] (0xc00251b550) Reply frame received for 1
I0611 11:44:11.959656       7 log.go:172] (0xc00251b550) (0xc002acf040) Create stream
I0611 11:44:11.959677       7 log.go:172] (0xc00251b550) (0xc002acf040) Stream added, broadcasting: 3
I0611 11:44:11.960713       7 log.go:172] (0xc00251b550) Reply frame received for 3
I0611 11:44:11.960763       7 log.go:172] (0xc00251b550) (0xc002426140) Create stream
I0611 11:44:11.960778       7 log.go:172] (0xc00251b550) (0xc002426140) Stream added, broadcasting: 5
I0611 11:44:11.961825       7 log.go:172] (0xc00251b550) Reply frame received for 5
I0611 11:44:12.030055       7 log.go:172] (0xc00251b550) Data frame received for 5
I0611 11:44:12.030080       7 log.go:172] (0xc002426140) (5) Data frame handling
I0611 11:44:12.030319       7 log.go:172] (0xc00251b550) Data frame received for 3
I0611 11:44:12.030367       7 log.go:172] (0xc002acf040) (3) Data frame handling
I0611 11:44:12.030392       7 log.go:172] (0xc002acf040) (3) Data frame sent
I0611 11:44:12.030416       7 log.go:172] (0xc00251b550) Data frame received for 3
I0611 11:44:12.030432       7 log.go:172] (0xc002acf040) (3) Data frame handling
I0611 11:44:12.032105       7 log.go:172] (0xc00251b550) Data frame received for 1
I0611 11:44:12.032164       7 log.go:172] (0xc002a64000) (1) Data frame handling
I0611 11:44:12.032200       7 log.go:172] (0xc002a64000) (1) Data frame sent
I0611 11:44:12.032229       7 log.go:172] (0xc00251b550) (0xc002a64000) Stream removed, broadcasting: 1
I0611 11:44:12.032253       7 log.go:172] (0xc00251b550) Go away received
I0611 11:44:12.032350       7 log.go:172] (0xc00251b550) (0xc002a64000) Stream removed, broadcasting: 1
I0611 11:44:12.032365       7 log.go:172] (0xc00251b550) (0xc002acf040) Stream removed, broadcasting: 3
I0611 11:44:12.032374       7 log.go:172] (0xc00251b550) (0xc002426140) Stream removed, broadcasting: 5
Jun 11 11:44:12.032: INFO: Exec stderr: ""
Jun 11 11:44:12.032: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-455 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 11 11:44:12.032: INFO: >>> kubeConfig: /root/.kube/config
I0611 11:44:12.064199       7 log.go:172] (0xc002a93d90) (0xc0017b2820) Create stream
I0611 11:44:12.064227       7 log.go:172] (0xc002a93d90) (0xc0017b2820) Stream added, broadcasting: 1
I0611 11:44:12.066299       7 log.go:172] (0xc002a93d90) Reply frame received for 1
I0611 11:44:12.066319       7 log.go:172] (0xc002a93d90) (0xc0017b2960) Create stream
I0611 11:44:12.066325       7 log.go:172] (0xc002a93d90) (0xc0017b2960) Stream added, broadcasting: 3
I0611 11:44:12.067478       7 log.go:172] (0xc002a93d90) Reply frame received for 3
I0611 11:44:12.067511       7 log.go:172] (0xc002a93d90) (0xc0017b2aa0) Create stream
I0611 11:44:12.067523       7 log.go:172] (0xc002a93d90) (0xc0017b2aa0) Stream added, broadcasting: 5
I0611 11:44:12.068760       7 log.go:172] (0xc002a93d90) Reply frame received for 5
I0611 11:44:12.146360       7 log.go:172] (0xc002a93d90) Data frame received for 3
I0611 11:44:12.146395       7 log.go:172] (0xc0017b2960) (3) Data frame handling
I0611 11:44:12.146409       7 log.go:172] (0xc0017b2960) (3) Data frame sent
I0611 11:44:12.146419       7 log.go:172] (0xc002a93d90) Data frame received for 3
I0611 11:44:12.146426       7 log.go:172] (0xc0017b2960) (3) Data frame handling
I0611 11:44:12.146553       7 log.go:172] (0xc002a93d90) Data frame received for 5
I0611 11:44:12.146592       7 log.go:172] (0xc0017b2aa0) (5) Data frame handling
I0611 11:44:12.148578       7 log.go:172] (0xc002a93d90) Data frame received for 1
I0611 11:44:12.148619       7 log.go:172] (0xc0017b2820) (1) Data frame handling
I0611 11:44:12.148637       7 log.go:172] (0xc0017b2820) (1) Data frame sent
I0611 11:44:12.148647       7 log.go:172] (0xc002a93d90) (0xc0017b2820) Stream removed, broadcasting: 1
I0611 11:44:12.148664       7 log.go:172] (0xc002a93d90) Go away received
I0611 11:44:12.148824       7 log.go:172] (0xc002a93d90) (0xc0017b2820) Stream removed, broadcasting: 1
I0611 11:44:12.148850       7 log.go:172] (0xc002a93d90) (0xc0017b2960) Stream removed, broadcasting: 3
I0611 11:44:12.148859       7 log.go:172] (0xc002a93d90) (0xc0017b2aa0) Stream removed, broadcasting: 5
Jun 11 11:44:12.148: INFO: Exec stderr: ""
Jun 11 11:44:12.148: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-455 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 11 11:44:12.148: INFO: >>> kubeConfig: /root/.kube/config
I0611 11:44:12.331865       7 log.go:172] (0xc001e9a420) (0xc002acf4a0) Create stream
I0611 11:44:12.331921       7 log.go:172] (0xc001e9a420) (0xc002acf4a0) Stream added, broadcasting: 1
I0611 11:44:12.334285       7 log.go:172] (0xc001e9a420) Reply frame received for 1
I0611 11:44:12.334335       7 log.go:172] (0xc001e9a420) (0xc0024261e0) Create stream
I0611 11:44:12.334348       7 log.go:172] (0xc001e9a420) (0xc0024261e0) Stream added, broadcasting: 3
I0611 11:44:12.335384       7 log.go:172] (0xc001e9a420) Reply frame received for 3
I0611 11:44:12.335418       7 log.go:172] (0xc001e9a420) (0xc0022b6640) Create stream
I0611 11:44:12.335429       7 log.go:172] (0xc001e9a420) (0xc0022b6640) Stream added, broadcasting: 5
I0611 11:44:12.336281       7 log.go:172] (0xc001e9a420) Reply frame received for 5
I0611 11:44:12.406075       7 log.go:172] (0xc001e9a420) Data frame received for 5
I0611 11:44:12.406131       7 log.go:172] (0xc0022b6640) (5) Data frame handling
I0611 11:44:12.406162       7 log.go:172] (0xc001e9a420) Data frame received for 3
I0611 11:44:12.406186       7 log.go:172] (0xc0024261e0) (3) Data frame handling
I0611 11:44:12.406228       7 log.go:172] (0xc0024261e0) (3) Data frame sent
I0611 11:44:12.406250       7 log.go:172] (0xc001e9a420) Data frame received for 3
I0611 11:44:12.406272       7 log.go:172] (0xc0024261e0) (3) Data frame handling
I0611 11:44:12.407296       7 log.go:172] (0xc001e9a420) Data frame received for 1
I0611 11:44:12.407343       7 log.go:172] (0xc002acf4a0) (1) Data frame handling
I0611 11:44:12.407375       7 log.go:172] (0xc002acf4a0) (1) Data frame sent
I0611 11:44:12.407405       7 log.go:172] (0xc001e9a420) (0xc002acf4a0) Stream removed, broadcasting: 1
I0611 11:44:12.407447       7 log.go:172] (0xc001e9a420) Go away received
I0611 11:44:12.407549       7 log.go:172] (0xc001e9a420) (0xc002acf4a0) Stream removed, broadcasting: 1
I0611 11:44:12.407592       7 log.go:172] (0xc001e9a420) (0xc0024261e0) Stream removed, broadcasting: 3
I0611 11:44:12.407615       7 log.go:172] (0xc001e9a420) (0xc0022b6640) Stream removed, broadcasting: 5
Jun 11 11:44:12.407: INFO: Exec stderr: ""
Jun 11 11:44:12.407: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-455 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 11 11:44:12.407: INFO: >>> kubeConfig: /root/.kube/config
I0611 11:44:12.440317       7 log.go:172] (0xc001e9aa50) (0xc002acf720) Create stream
I0611 11:44:12.440348       7 log.go:172] (0xc001e9aa50) (0xc002acf720) Stream added, broadcasting: 1
I0611 11:44:12.442400       7 log.go:172] (0xc001e9aa50) Reply frame received for 1
I0611 11:44:12.442427       7 log.go:172] (0xc001e9aa50) (0xc002acf900) Create stream
I0611 11:44:12.442437       7 log.go:172] (0xc001e9aa50) (0xc002acf900) Stream added, broadcasting: 3
I0611 11:44:12.443346       7 log.go:172] (0xc001e9aa50) Reply frame received for 3
I0611 11:44:12.443379       7 log.go:172] (0xc001e9aa50) (0xc002acfa40) Create stream
I0611 11:44:12.443397       7 log.go:172] (0xc001e9aa50) (0xc002acfa40) Stream added, broadcasting: 5
I0611 11:44:12.444250       7 log.go:172] (0xc001e9aa50) Reply frame received for 5
I0611 11:44:12.497971       7 log.go:172] (0xc001e9aa50) Data frame received for 5
I0611 11:44:12.497999       7 log.go:172] (0xc002acfa40) (5) Data frame handling
I0611 11:44:12.498021       7 log.go:172] (0xc001e9aa50) Data frame received for 3
I0611 11:44:12.498040       7 log.go:172] (0xc002acf900) (3) Data frame handling
I0611 11:44:12.498049       7 log.go:172] (0xc002acf900) (3) Data frame sent
I0611 11:44:12.498059       7 log.go:172] (0xc001e9aa50) Data frame received for 3
I0611 11:44:12.498064       7 log.go:172] (0xc002acf900) (3) Data frame handling
I0611 11:44:12.499585       7 log.go:172] (0xc001e9aa50) Data frame received for 1
I0611 11:44:12.499604       7 log.go:172] (0xc002acf720) (1) Data frame handling
I0611 11:44:12.499626       7 log.go:172] (0xc002acf720) (1) Data frame sent
I0611 11:44:12.499634       7 log.go:172] (0xc001e9aa50) (0xc002acf720) Stream removed, broadcasting: 1
I0611 11:44:12.499656       7 log.go:172] (0xc001e9aa50) Go away received
I0611 11:44:12.499805       7 log.go:172] (0xc001e9aa50) (0xc002acf720) Stream removed, broadcasting: 1
I0611 11:44:12.499854       7 log.go:172] (0xc001e9aa50) (0xc002acf900) Stream removed, broadcasting: 3
I0611 11:44:12.499895       7 log.go:172] (0xc001e9aa50) (0xc002acfa40) Stream removed, broadcasting: 5
Jun 11 11:44:12.499: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:44:12.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-455" for this suite.

• [SLOW TEST:17.499 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":147,"skipped":2385,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:44:12.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-da67139e-230d-4935-80ff-c74474571883
STEP: Creating a pod to test consume secrets
Jun 11 11:44:13.804: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-98860a69-902e-459d-bedc-1925c78b9640" in namespace "projected-7815" to be "Succeeded or Failed"
Jun 11 11:44:13.807: INFO: Pod "pod-projected-secrets-98860a69-902e-459d-bedc-1925c78b9640": Phase="Pending", Reason="", readiness=false. Elapsed: 3.06414ms
Jun 11 11:44:15.883: INFO: Pod "pod-projected-secrets-98860a69-902e-459d-bedc-1925c78b9640": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078826697s
Jun 11 11:44:17.886: INFO: Pod "pod-projected-secrets-98860a69-902e-459d-bedc-1925c78b9640": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082384904s
STEP: Saw pod success
Jun 11 11:44:17.886: INFO: Pod "pod-projected-secrets-98860a69-902e-459d-bedc-1925c78b9640" satisfied condition "Succeeded or Failed"
Jun 11 11:44:17.889: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-98860a69-902e-459d-bedc-1925c78b9640 container projected-secret-volume-test: 
STEP: delete the pod
Jun 11 11:44:18.033: INFO: Waiting for pod pod-projected-secrets-98860a69-902e-459d-bedc-1925c78b9640 to disappear
Jun 11 11:44:18.043: INFO: Pod pod-projected-secrets-98860a69-902e-459d-bedc-1925c78b9640 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:44:18.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7815" for this suite.

• [SLOW TEST:5.542 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":148,"skipped":2416,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:44:18.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-be80a098-36d3-4009-9360-770b408c76d7
STEP: Creating a pod to test consume configMaps
Jun 11 11:44:18.118: INFO: Waiting up to 5m0s for pod "pod-configmaps-1be0c799-d9f5-427c-89f0-e598e9c85038" in namespace "configmap-9333" to be "Succeeded or Failed"
Jun 11 11:44:18.175: INFO: Pod "pod-configmaps-1be0c799-d9f5-427c-89f0-e598e9c85038": Phase="Pending", Reason="", readiness=false. Elapsed: 57.691181ms
Jun 11 11:44:20.200: INFO: Pod "pod-configmaps-1be0c799-d9f5-427c-89f0-e598e9c85038": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08213392s
Jun 11 11:44:22.204: INFO: Pod "pod-configmaps-1be0c799-d9f5-427c-89f0-e598e9c85038": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.086024704s
STEP: Saw pod success
Jun 11 11:44:22.204: INFO: Pod "pod-configmaps-1be0c799-d9f5-427c-89f0-e598e9c85038" satisfied condition "Succeeded or Failed"
Jun 11 11:44:22.207: INFO: Trying to get logs from node kali-worker pod pod-configmaps-1be0c799-d9f5-427c-89f0-e598e9c85038 container configmap-volume-test: 
STEP: delete the pod
Jun 11 11:44:22.362: INFO: Waiting for pod pod-configmaps-1be0c799-d9f5-427c-89f0-e598e9c85038 to disappear
Jun 11 11:44:22.378: INFO: Pod pod-configmaps-1be0c799-d9f5-427c-89f0-e598e9c85038 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:44:22.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9333" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":149,"skipped":2433,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:44:22.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jun 11 11:44:22.441: INFO: Waiting up to 5m0s for pod "downwardapi-volume-41c10b37-2799-43eb-a211-5420192f121a" in namespace "projected-6826" to be "Succeeded or Failed"
Jun 11 11:44:22.456: INFO: Pod "downwardapi-volume-41c10b37-2799-43eb-a211-5420192f121a": Phase="Pending", Reason="", readiness=false. Elapsed: 15.320932ms
Jun 11 11:44:24.460: INFO: Pod "downwardapi-volume-41c10b37-2799-43eb-a211-5420192f121a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019272351s
Jun 11 11:44:26.546: INFO: Pod "downwardapi-volume-41c10b37-2799-43eb-a211-5420192f121a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105333007s
STEP: Saw pod success
Jun 11 11:44:26.547: INFO: Pod "downwardapi-volume-41c10b37-2799-43eb-a211-5420192f121a" satisfied condition "Succeeded or Failed"
Jun 11 11:44:26.549: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-41c10b37-2799-43eb-a211-5420192f121a container client-container: 
STEP: delete the pod
Jun 11 11:44:26.608: INFO: Waiting for pod downwardapi-volume-41c10b37-2799-43eb-a211-5420192f121a to disappear
Jun 11 11:44:26.612: INFO: Pod downwardapi-volume-41c10b37-2799-43eb-a211-5420192f121a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:44:26.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6826" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":150,"skipped":2474,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:44:26.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:44:26.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8868" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":151,"skipped":2488,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:44:26.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 11 11:44:27.256: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 11 11:44:29.264: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472667, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472667, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472667, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472667, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 11 11:44:32.309: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:44:33.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6344" for this suite.
STEP: Destroying namespace "webhook-6344-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.490 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":152,"skipped":2519,"failed":0}
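For reference, the "dummy validating-webhook-configuration object" this test creates and deletes would look roughly like the sketch below. The namespace and service name come from the log above; the configuration name, webhook name, and path are illustrative, not taken from the log:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: e2e-test-dummy-webhook        # illustrative name, not shown in the log
webhooks:
- name: dummy.example.com             # hypothetical webhook name
  clientConfig:
    service:
      name: e2e-test-webhook          # service name seen at 11:44:32 above
      namespace: webhook-6344         # namespace destroyed in AfterEach above
      path: /always-allow             # illustrative path
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Ignore
```

The point of the test is that the apiserver exempts webhook configuration objects themselves from admission webhooks, so deleting such an object always succeeds even when a registered webhook would otherwise mutate or reject it.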
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:44:33.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Jun 11 11:44:33.262: INFO: PodSpec: initContainers in spec.initContainers
Jun 11 11:45:21.212: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-004a8f8f-181d-476f-a93c-d60262ff953b", GenerateName:"", Namespace:"init-container-295", SelfLink:"/api/v1/namespaces/init-container-295/pods/pod-init-004a8f8f-181d-476f-a93c-d60262ff953b", UID:"0a058511-dde0-4acd-a683-2d5fbc079a3f", ResourceVersion:"11521211", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63727472673, loc:(*time.Location)(0x7b200c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"262478793"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001d19d60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001d19e00)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001d19e20), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001d19e60)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-54c9p", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0034d4e40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), 
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-54c9p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-54c9p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), 
SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-54c9p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002c23858), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kali-worker2", HostNetwork:false, HostPID:false, HostIPC:false, 
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0023d9420), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002c238e0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002c23900)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002c23908), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002c2390c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472673, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472673, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472673, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, 
v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472673, loc:(*time.Location)(0x7b200c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.18", PodIP:"10.244.1.254", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.254"}}, StartTime:(*v1.Time)(0xc001d19e80), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0023d9500)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0023d9570)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://788db77aff5cf2bbb9824ffe300fea68f3242ab78d0c76ed8bfa911c41cd09a5", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001d19f40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001d19f00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc002c2398f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:45:21.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-295" for this suite.

• [SLOW TEST:48.057 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":153,"skipped":2551,"failed":0}
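The PodSpec struct dumped above reduces to roughly the following manifest (reconstructed from the dump; names, images, commands, and the CPU quantity all appear in the log). Because `init1` always exits non-zero and the restart policy is `Always`, the kubelet keeps restarting it, and neither `init2` nor the app container `run1` ever starts — which is exactly what the test asserts:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-004a8f8f-181d-476f-a93c-d60262ff953b
  namespace: init-container-295
  labels:
    name: foo
spec:
  restartPolicy: Always            # so the failing init container is retried forever
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]        # always fails; blocks init2 and run1
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]         # never reached while init1 keeps failing
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        cpu: 100m
      limits:
        cpu: 100m
```

The status dump confirms this: `init1` has `RestartCount:3` with a terminated state, while `init2` and `run1` are still `Waiting` with empty container IDs.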
SSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:45:21.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Jun 11 11:45:21.366: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:45:28.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9850" for this suite.

• [SLOW TEST:7.728 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":154,"skipped":2560,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:45:29.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jun 11 11:45:29.050: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-5417'
Jun 11 11:45:34.346: INFO: stderr: ""
Jun 11 11:45:34.346: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Jun 11 11:45:39.397: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-5417 -o json'
Jun 11 11:45:39.502: INFO: stderr: ""
Jun 11 11:45:39.502: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-06-11T11:45:34Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"managedFields\": [\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:metadata\": {\n                        \"f:labels\": {\n                            \".\": {},\n                            \"f:run\": {}\n                        }\n                    },\n                    \"f:spec\": {\n                        \"f:containers\": {\n                            \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n                                \".\": {},\n                                \"f:image\": {},\n                                \"f:imagePullPolicy\": {},\n                                \"f:name\": {},\n                                \"f:resources\": {},\n                                \"f:terminationMessagePath\": {},\n                                \"f:terminationMessagePolicy\": {}\n                            }\n                        },\n                        \"f:dnsPolicy\": {},\n                        \"f:enableServiceLinks\": {},\n                        \"f:restartPolicy\": {},\n                        \"f:schedulerName\": {},\n                        \"f:securityContext\": {},\n                        \"f:terminationGracePeriodSeconds\": {}\n                    }\n                },\n                \"manager\": \"kubectl\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-06-11T11:45:34Z\"\n            },\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:status\": {\n                        \"f:conditions\": {\n                        
    \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            }\n                        },\n                        \"f:containerStatuses\": {},\n                        \"f:hostIP\": {},\n                        \"f:phase\": {},\n                        \"f:podIP\": {},\n                        \"f:podIPs\": {\n                            \".\": {},\n                            \"k:{\\\"ip\\\":\\\"10.244.2.64\\\"}\": {\n                                \".\": {},\n                                \"f:ip\": {}\n                            }\n                        },\n                        \"f:startTime\": {}\n                    }\n                },\n                \"manager\": \"kubelet\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-06-11T11:45:37Z\"\n            }\n        ],\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-5417\",\n        \"resourceVersion\": \"11521321\",\n        \"selfLink\": 
\"/api/v1/namespaces/kubectl-5417/pods/e2e-test-httpd-pod\",\n        \"uid\": \"32a97ae5-290a-4f5f-8e3a-93dacb7d6641\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-gdcm2\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"kali-worker\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-gdcm2\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-gdcm2\"\n                }\n        
    }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-06-11T11:45:34Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-06-11T11:45:37Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-06-11T11:45:37Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-06-11T11:45:34Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://ce53cbce2a02b4d7927775705fbc4b2a57df2435d7a0b10d7b44665e3273c683\",\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-06-11T11:45:37Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.17.0.15\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.2.64\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.2.64\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        
\"startTime\": \"2020-06-11T11:45:34Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jun 11 11:45:39.503: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-5417'
Jun 11 11:45:39.832: INFO: stderr: ""
Jun 11 11:45:39.832: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jun 11 11:45:39.877: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5417'
Jun 11 11:45:53.724: INFO: stderr: ""
Jun 11 11:45:53.724: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:45:53.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5417" for this suite.

• [SLOW TEST:24.750 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":275,"completed":155,"skipped":2615,"failed":0}
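The test pipes a modified copy of the pod JSON (fetched with `kubectl get ... -o json` above) into `kubectl replace -f -`. An equivalent hand-written replacement manifest would be a sketch like this, keeping the pod's identity and swapping only the container image, since most other pod spec fields are immutable on replace:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-httpd-pod
  namespace: kubectl-5417
  labels:
    run: e2e-test-httpd-pod
spec:
  containers:
  - name: e2e-test-httpd-pod
    image: docker.io/library/busybox:1.29   # replaces httpd:2.4.38-alpine, as verified at 11:45:39
```

Applied with `kubectl replace -f pod.yaml --namespace=kubectl-5417` (mirroring the logged invocation, minus the `--server`/`--kubeconfig` flags specific to this run).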
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:45:53.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:46:09.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4257" for this suite.

• [SLOW TEST:16.164 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":156,"skipped":2645,"failed":0}
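A ResourceQuota that counts ConfigMaps, as exercised by this test, might look roughly like the following. The namespace comes from the log; the quota name and the hard limit value are illustrative:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota                 # illustrative; the log does not show the name
  namespace: resourcequota-4257
spec:
  hard:
    configmaps: "2"                # illustrative limit on ConfigMap count
```

The test then watches `status.used.configmaps` rise when the ConfigMap is created and fall back after it is deleted, which is what the "Ensuring resource quota status captures/released" steps above verify.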
S
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:46:09.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 11:46:10.024: INFO: (0) /api/v1/nodes/kali-worker/proxy/logs/: 
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:46:10.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2074" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":158,"skipped":2650,"failed":0}
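The create/get/update/verify/delete steps above operate on a ResourceQuota object that the e2e test constructs in Go. A hypothetical manifest of the same shape (names and hard limits are illustrative, not taken from the test source):

```yaml
# Hypothetical equivalent of the ResourceQuota exercised by the steps above;
# the conformance test builds the object programmatically, not from a manifest.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota
spec:
  hard:
    pods: "5"
    services: "3"
```

The "Updating" and "Verifying" steps then patch `spec.hard` and re-read the object until the change is observed.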
SSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:46:10.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:46:25.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6972" for this suite.
STEP: Destroying namespace "nsdeletetest-3624" for this suite.
Jun 11 11:46:25.586: INFO: Namespace nsdeletetest-3624 was already deleted
STEP: Destroying namespace "nsdeletetest-2837" for this suite.

• [SLOW TEST:15.318 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":159,"skipped":2660,"failed":0}
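The namespace test above checks cascading deletion: removing a namespace removes the pods inside it, and a recreated namespace starts empty. A toy in-memory model of that semantic (not the real API machinery):

```python
class ToyCluster:
    """Toy model of namespace deletion semantics: deleting a namespace
    removes every pod inside it, and a recreated namespace starts empty."""

    def __init__(self):
        self.pods = {}  # namespace -> set of pod names

    def create_namespace(self, ns):
        self.pods[ns] = set()

    def create_pod(self, ns, name):
        self.pods[ns].add(name)

    def delete_namespace(self, ns):
        # Cascade: the pods go away with the namespace.
        del self.pods[ns]


c = ToyCluster()
c.create_namespace("nsdeletetest")
c.create_pod("nsdeletetest", "test-pod")
c.delete_namespace("nsdeletetest")
c.create_namespace("nsdeletetest")      # "Recreating the namespace"
remaining = c.pods["nsdeletetest"]      # "Verifying there are no pods"
```

In the real test the "Waiting for the namespace to be removed" step polls the API until the namespace finalizer completes; the toy model skips that asynchrony.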
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:46:25.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-a47e50bc-0116-4fc8-85c5-c6461825ecb4
STEP: Creating a pod to test consume configMaps
Jun 11 11:46:25.699: INFO: Waiting up to 5m0s for pod "pod-configmaps-0fe3fcda-12d1-4a85-9d96-5253228295e1" in namespace "configmap-4331" to be "Succeeded or Failed"
Jun 11 11:46:25.721: INFO: Pod "pod-configmaps-0fe3fcda-12d1-4a85-9d96-5253228295e1": Phase="Pending", Reason="", readiness=false. Elapsed: 21.403494ms
Jun 11 11:46:27.725: INFO: Pod "pod-configmaps-0fe3fcda-12d1-4a85-9d96-5253228295e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025605701s
Jun 11 11:46:29.729: INFO: Pod "pod-configmaps-0fe3fcda-12d1-4a85-9d96-5253228295e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030032928s
STEP: Saw pod success
Jun 11 11:46:29.730: INFO: Pod "pod-configmaps-0fe3fcda-12d1-4a85-9d96-5253228295e1" satisfied condition "Succeeded or Failed"
Jun 11 11:46:29.733: INFO: Trying to get logs from node kali-worker pod pod-configmaps-0fe3fcda-12d1-4a85-9d96-5253228295e1 container configmap-volume-test: 
STEP: delete the pod
Jun 11 11:46:29.771: INFO: Waiting for pod pod-configmaps-0fe3fcda-12d1-4a85-9d96-5253228295e1 to disappear
Jun 11 11:46:29.811: INFO: Pod pod-configmaps-0fe3fcda-12d1-4a85-9d96-5253228295e1 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:46:29.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4331" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":160,"skipped":2667,"failed":0}
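The "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" lines above come from the framework's poll loop. A minimal standalone sketch of that pattern, with a stubbed phase lookup in place of a real API call:

```python
import itertools
import time


def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reaches a terminal phase or the
    timeout expires -- a sketch of the framework's wait loop; get_phase
    is a stand-in for a real pod-status lookup."""
    deadline = clock() + timeout
    while clock() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase in time")


# Simulated status sequence, as in the log: Pending, Pending, Succeeded.
phases = itertools.chain(["Pending", "Pending"], itertools.repeat("Succeeded"))
result = wait_for_terminal_phase(lambda: next(phases), sleep=lambda _: None)
```

The real framework additionally logs the phase and elapsed time on every poll, which is what produces the repeated "Phase=\"Pending\" ... Elapsed:" lines.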
S
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:46:29.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:46:36.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5431" for this suite.
STEP: Destroying namespace "nsdeletetest-9664" for this suite.
Jun 11 11:46:36.269: INFO: Namespace nsdeletetest-9664 was already deleted
STEP: Destroying namespace "nsdeletetest-2218" for this suite.

• [SLOW TEST:6.454 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":161,"skipped":2668,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:46:36.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-3967
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 11 11:46:36.369: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 11 11:46:36.450: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 11 11:46:38.566: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 11 11:46:40.455: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 11 11:46:42.455: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 11 11:46:44.455: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 11 11:46:46.455: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 11 11:46:48.454: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 11 11:46:50.455: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 11 11:46:52.455: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 11 11:46:54.454: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 11 11:46:54.459: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 11 11:46:58.554: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.67:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3967 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 11 11:46:58.554: INFO: >>> kubeConfig: /root/.kube/config
I0611 11:46:58.592543       7 log.go:172] (0xc001e9a840) (0xc002acee60) Create stream
I0611 11:46:58.592579       7 log.go:172] (0xc001e9a840) (0xc002acee60) Stream added, broadcasting: 1
I0611 11:46:58.594380       7 log.go:172] (0xc001e9a840) Reply frame received for 1
I0611 11:46:58.594411       7 log.go:172] (0xc001e9a840) (0xc00174b720) Create stream
I0611 11:46:58.594424       7 log.go:172] (0xc001e9a840) (0xc00174b720) Stream added, broadcasting: 3
I0611 11:46:58.595597       7 log.go:172] (0xc001e9a840) Reply frame received for 3
I0611 11:46:58.595649       7 log.go:172] (0xc001e9a840) (0xc002acf040) Create stream
I0611 11:46:58.595679       7 log.go:172] (0xc001e9a840) (0xc002acf040) Stream added, broadcasting: 5
I0611 11:46:58.596534       7 log.go:172] (0xc001e9a840) Reply frame received for 5
I0611 11:46:58.738958       7 log.go:172] (0xc001e9a840) Data frame received for 3
I0611 11:46:58.739023       7 log.go:172] (0xc00174b720) (3) Data frame handling
I0611 11:46:58.739037       7 log.go:172] (0xc00174b720) (3) Data frame sent
I0611 11:46:58.739042       7 log.go:172] (0xc001e9a840) Data frame received for 3
I0611 11:46:58.739047       7 log.go:172] (0xc00174b720) (3) Data frame handling
I0611 11:46:58.739135       7 log.go:172] (0xc001e9a840) Data frame received for 5
I0611 11:46:58.739159       7 log.go:172] (0xc002acf040) (5) Data frame handling
I0611 11:46:58.741328       7 log.go:172] (0xc001e9a840) Data frame received for 1
I0611 11:46:58.741349       7 log.go:172] (0xc002acee60) (1) Data frame handling
I0611 11:46:58.741361       7 log.go:172] (0xc002acee60) (1) Data frame sent
I0611 11:46:58.741379       7 log.go:172] (0xc001e9a840) (0xc002acee60) Stream removed, broadcasting: 1
I0611 11:46:58.741393       7 log.go:172] (0xc001e9a840) Go away received
I0611 11:46:58.741573       7 log.go:172] (0xc001e9a840) (0xc002acee60) Stream removed, broadcasting: 1
I0611 11:46:58.741597       7 log.go:172] (0xc001e9a840) (0xc00174b720) Stream removed, broadcasting: 3
I0611 11:46:58.741611       7 log.go:172] (0xc001e9a840) (0xc002acf040) Stream removed, broadcasting: 5
Jun 11 11:46:58.741: INFO: Found all expected endpoints: [netserver-0]
Jun 11 11:46:58.745: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.2:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3967 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 11 11:46:58.745: INFO: >>> kubeConfig: /root/.kube/config
I0611 11:46:58.781925       7 log.go:172] (0xc001f00420) (0xc0017b2960) Create stream
I0611 11:46:58.782014       7 log.go:172] (0xc001f00420) (0xc0017b2960) Stream added, broadcasting: 1
I0611 11:46:58.783905       7 log.go:172] (0xc001f00420) Reply frame received for 1
I0611 11:46:58.783940       7 log.go:172] (0xc001f00420) (0xc0017b2aa0) Create stream
I0611 11:46:58.783953       7 log.go:172] (0xc001f00420) (0xc0017b2aa0) Stream added, broadcasting: 3
I0611 11:46:58.784888       7 log.go:172] (0xc001f00420) Reply frame received for 3
I0611 11:46:58.784934       7 log.go:172] (0xc001f00420) (0xc001ad6140) Create stream
I0611 11:46:58.784947       7 log.go:172] (0xc001f00420) (0xc001ad6140) Stream added, broadcasting: 5
I0611 11:46:58.786004       7 log.go:172] (0xc001f00420) Reply frame received for 5
I0611 11:46:58.848620       7 log.go:172] (0xc001f00420) Data frame received for 3
I0611 11:46:58.848657       7 log.go:172] (0xc0017b2aa0) (3) Data frame handling
I0611 11:46:58.848685       7 log.go:172] (0xc0017b2aa0) (3) Data frame sent
I0611 11:46:58.848700       7 log.go:172] (0xc001f00420) Data frame received for 3
I0611 11:46:58.848712       7 log.go:172] (0xc0017b2aa0) (3) Data frame handling
I0611 11:46:58.848976       7 log.go:172] (0xc001f00420) Data frame received for 5
I0611 11:46:58.848999       7 log.go:172] (0xc001ad6140) (5) Data frame handling
I0611 11:46:58.850691       7 log.go:172] (0xc001f00420) Data frame received for 1
I0611 11:46:58.850726       7 log.go:172] (0xc0017b2960) (1) Data frame handling
I0611 11:46:58.850751       7 log.go:172] (0xc0017b2960) (1) Data frame sent
I0611 11:46:58.850784       7 log.go:172] (0xc001f00420) (0xc0017b2960) Stream removed, broadcasting: 1
I0611 11:46:58.850822       7 log.go:172] (0xc001f00420) Go away received
I0611 11:46:58.850916       7 log.go:172] (0xc001f00420) (0xc0017b2960) Stream removed, broadcasting: 1
I0611 11:46:58.850969       7 log.go:172] (0xc001f00420) (0xc0017b2aa0) Stream removed, broadcasting: 3
I0611 11:46:58.851014       7 log.go:172] (0xc001f00420) (0xc001ad6140) Stream removed, broadcasting: 5
Jun 11 11:46:58.851: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:46:58.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3967" for this suite.

• [SLOW TEST:22.584 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":162,"skipped":2680,"failed":0}
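The connectivity check logged above execs `curl http://<pod-ip>:8080/hostName` inside a host-network pod and expects each netserver to report its name. A self-contained sketch of that request/response shape, using a local stand-in server instead of the agnhost netserver:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class HostNameHandler(BaseHTTPRequestHandler):
    """Stand-in for the agnhost netserver's /hostName endpoint."""

    def do_GET(self):
        if self.path == "/hostName":
            body = b"netserver-0"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep the example's output quiet
        pass


server = HTTPServer(("127.0.0.1", 0), HostNameHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/hostName" % server.server_address[1]
with urllib.request.urlopen(url, timeout=5) as resp:
    hostname = resp.read().decode().strip()  # the curl | grep -v '^\s*$' step
server.shutdown()
```

The test passes once every expected endpoint (`netserver-0`, `netserver-1`) has answered with a non-empty hostname.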
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:46:58.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating replication controller my-hostname-basic-37e7f608-fe60-4cf7-a6a7-be19aeffee40
Jun 11 11:46:59.013: INFO: Pod name my-hostname-basic-37e7f608-fe60-4cf7-a6a7-be19aeffee40: Found 0 pods out of 1
Jun 11 11:47:04.018: INFO: Pod name my-hostname-basic-37e7f608-fe60-4cf7-a6a7-be19aeffee40: Found 1 pods out of 1
Jun 11 11:47:04.018: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-37e7f608-fe60-4cf7-a6a7-be19aeffee40" are running
Jun 11 11:47:04.023: INFO: Pod "my-hostname-basic-37e7f608-fe60-4cf7-a6a7-be19aeffee40-xm854" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-11 11:46:59 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-11 11:47:02 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-11 11:47:02 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-11 11:46:59 +0000 UTC Reason: Message:}])
Jun 11 11:47:04.023: INFO: Trying to dial the pod
Jun 11 11:47:09.034: INFO: Controller my-hostname-basic-37e7f608-fe60-4cf7-a6a7-be19aeffee40: Got expected result from replica 1 [my-hostname-basic-37e7f608-fe60-4cf7-a6a7-be19aeffee40-xm854]: "my-hostname-basic-37e7f608-fe60-4cf7-a6a7-be19aeffee40-xm854", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:47:09.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2108" for this suite.

• [SLOW TEST:10.182 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":163,"skipped":2702,"failed":0}
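The "Trying to dial the pod" / "Got expected result from replica 1" lines reflect the verification step: each replica must answer with its own pod name. A sketch of that check with a hypothetical dial function (the real test dials through the API server proxy):

```python
def verify_replicas(replicas, dial):
    """Dial each replica and require that it reports its own pod name,
    matching the 'Got expected result from replica N' log lines."""
    for i, name in enumerate(replicas, start=1):
        got = dial(name)
        if got != name:
            raise AssertionError(f"replica {i} ({name}) answered {got!r}")
    return len(replicas)


pods = ["my-hostname-basic-xm854"]  # hypothetical pod name
# Each hostname-serving pod simply echoes its own name back.
successes = verify_replicas(pods, dial=lambda name: name)
```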
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:47:09.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Jun 11 11:47:09.127: INFO: >>> kubeConfig: /root/.kube/config
Jun 11 11:47:12.082: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:47:22.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7328" for this suite.

• [SLOW TEST:13.823 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":164,"skipped":2745,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:47:22.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5997.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5997.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5997.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5997.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5997.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5997.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 11 11:47:29.032: INFO: DNS probes using dns-5997/dns-test-3d5bc9f0-3d78-4f96-bb2f-df1805ab9b9a succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:47:29.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5997" for this suite.

• [SLOW TEST:6.657 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":165,"skipped":2775,"failed":0}
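The awk snippet in the probe commands above derives a pod A record from the pod IP: dots become dashes, then the namespace and `pod.cluster.local` are appended. The same transformation in Python:

```python
def pod_a_record(pod_ip, namespace):
    """Build the pod A record the dig loop probes, mirroring the awk
    snippet in the logged command: 10.244.2.67 in namespace dns-5997
    becomes 10-244-2-67.dns-5997.pod.cluster.local."""
    return "%s.%s.pod.cluster.local" % (pod_ip.replace(".", "-"), namespace)


rec = pod_a_record("10.244.2.67", "dns-5997")
```

The probe pods then resolve that name over both UDP (`+notcp`) and TCP (`+tcp`) and write `OK` marker files that the test collects.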
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:47:29.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:47:45.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8555" for this suite.

• [SLOW TEST:16.471 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":166,"skipped":2778,"failed":0}
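The Terminating/NotTerminating steps above hinge on scope matching: a pod with `spec.activeDeadlineSeconds` set counts toward a Terminating-scoped quota, and any other pod toward a NotTerminating-scoped one. A toy model of that rule (pods as plain dicts, not real API objects):

```python
def matches_scope(scope, pod):
    """Toy model of the Terminating/NotTerminating quota-scope split:
    pods with spec.activeDeadlineSeconds set are 'terminating'."""
    terminating = pod.get("activeDeadlineSeconds") is not None
    if scope == "Terminating":
        return terminating
    if scope == "NotTerminating":
        return not terminating
    raise ValueError(f"unknown scope: {scope}")


long_running = {"name": "pod-long"}                              # no deadline
terminating_pod = {"name": "pod-term", "activeDeadlineSeconds": 3600}
```

This is why each pod in the log is captured by exactly one of the two quotas and ignored by the other.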
SSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:47:45.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 11:47:46.089: INFO: Creating ReplicaSet my-hostname-basic-98df8fd3-483a-479a-a6a9-0a492ed70ebd
Jun 11 11:47:46.121: INFO: Pod name my-hostname-basic-98df8fd3-483a-479a-a6a9-0a492ed70ebd: Found 0 pods out of 1
Jun 11 11:47:51.138: INFO: Pod name my-hostname-basic-98df8fd3-483a-479a-a6a9-0a492ed70ebd: Found 1 pods out of 1
Jun 11 11:47:51.138: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-98df8fd3-483a-479a-a6a9-0a492ed70ebd" is running
Jun 11 11:47:51.147: INFO: Pod "my-hostname-basic-98df8fd3-483a-479a-a6a9-0a492ed70ebd-qmx5q" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-11 11:47:46 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-11 11:47:49 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-11 11:47:49 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-11 11:47:46 +0000 UTC Reason: Message:}])
Jun 11 11:47:51.147: INFO: Trying to dial the pod
Jun 11 11:47:56.159: INFO: Controller my-hostname-basic-98df8fd3-483a-479a-a6a9-0a492ed70ebd: Got expected result from replica 1 [my-hostname-basic-98df8fd3-483a-479a-a6a9-0a492ed70ebd-qmx5q]: "my-hostname-basic-98df8fd3-483a-479a-a6a9-0a492ed70ebd-qmx5q", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:47:56.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9913" for this suite.

• [SLOW TEST:10.199 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":167,"skipped":2782,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:47:56.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:48:03.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7443" for this suite.

• [SLOW TEST:7.119 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":168,"skipped":2793,"failed":0}
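The adoption step above ("Then the orphan pod is adopted") relies on label-selector matching: a controller takes ownership of unowned pods whose labels satisfy its selector. A toy sketch of that step (dicts standing in for pod objects and owner references):

```python
def adopt_orphans(selector, pods, controller_name):
    """Attach an owner reference to orphan pods whose labels match the
    controller's selector -- a toy model of the adoption step logged above."""
    adopted = []
    for pod in pods:
        labels_match = all(pod["labels"].get(k) == v
                           for k, v in selector.items())
        if labels_match and not pod.get("ownerRef"):
            pod["ownerRef"] = controller_name
            adopted.append(pod["name"])
    return adopted


orphan = {"name": "pod-adoption", "labels": {"name": "pod-adoption"}}
adopted = adopt_orphans({"name": "pod-adoption"}, [orphan], "rc-pod-adoption")
```

The real controller manager does this through the API with controller-owner references; already-owned pods are left alone.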
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:48:03.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4431.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-4431.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4431.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-4431.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4431.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4431.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-4431.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4431.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-4431.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4431.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
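Both probe scripts derive the pod's own A record with the `hostname -i | awk -F. '{...}'` pipeline: the dots in the pod IP become dashes, followed by the namespace and the `pod.cluster.local` suffix. The same transformation in Python (a sketch for readability; the sample IP is made up, not from this run):

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Mirror the probe's awk expression: 10.244.1.23 in namespace
    dns-4431 maps to 10-244-1-23.dns-4431.pod.cluster.local."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.cluster.local"

print(pod_a_record("10.244.1.23", "dns-4431"))
# 10-244-1-23.dns-4431.pod.cluster.local
```

The probe then resolves that name over both UDP (`+notcp`) and TCP (`+tcp`) and writes an `OK` marker file per successful lookup, which the framework reads back from the pod.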

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 11 11:48:11.479: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:11.482: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:11.486: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:11.489: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:11.499: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:11.502: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:11.505: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:11.509: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:11.516: INFO: Lookups using dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4431.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4431.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local jessie_udp@dns-test-service-2.dns-4431.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4431.svc.cluster.local]

Jun 11 11:48:17.759: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:17.840: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:17.921: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:17.924: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:17.940: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:17.943: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:17.946: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:17.948: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:17.954: INFO: Lookups using dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4431.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4431.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local jessie_udp@dns-test-service-2.dns-4431.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4431.svc.cluster.local]

Jun 11 11:48:21.522: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:21.526: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:21.530: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:21.533: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:21.543: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:21.546: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:21.548: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:21.552: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:21.558: INFO: Lookups using dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4431.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4431.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local jessie_udp@dns-test-service-2.dns-4431.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4431.svc.cluster.local]

Jun 11 11:48:26.521: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:26.530: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:26.535: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:26.538: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:26.547: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:26.549: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:26.552: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:26.555: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:26.562: INFO: Lookups using dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4431.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4431.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local jessie_udp@dns-test-service-2.dns-4431.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4431.svc.cluster.local]

Jun 11 11:48:31.521: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:31.525: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:31.529: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:31.534: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:31.544: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:31.548: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:31.551: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:31.554: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:31.560: INFO: Lookups using dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4431.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4431.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local jessie_udp@dns-test-service-2.dns-4431.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4431.svc.cluster.local]

Jun 11 11:48:36.536: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:36.539: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:36.548: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:36.553: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:36.563: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:36.566: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:36.568: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:36.571: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4431.svc.cluster.local from pod dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88: the server could not find the requested resource (get pods dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88)
Jun 11 11:48:36.577: INFO: Lookups using dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4431.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4431.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4431.svc.cluster.local jessie_udp@dns-test-service-2.dns-4431.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4431.svc.cluster.local]

Jun 11 11:48:41.563: INFO: DNS probes using dns-4431/dns-test-7dd1e6ac-c9cf-462c-b266-6ba904af9d88 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:48:41.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4431" for this suite.

• [SLOW TEST:38.605 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":169,"skipped":2793,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:48:41.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 11:48:42.579: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"5b951db1-1d0d-4151-9649-ab2cb3d29c82", Controller:(*bool)(0xc00551c13a), BlockOwnerDeletion:(*bool)(0xc00551c13b)}}
Jun 11 11:48:42.612: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"168e7643-8b4e-4b52-ad6e-327453963717", Controller:(*bool)(0xc0054b2ff2), BlockOwnerDeletion:(*bool)(0xc0054b2ff3)}}
Jun 11 11:48:42.651: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"76b75de6-895d-47c8-b46f-617ea877bc5b", Controller:(*bool)(0xc0054fc9e2), BlockOwnerDeletion:(*bool)(0xc0054fc9e3)}}
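The three INFO lines above show a closed ownership circle: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2. A sketch of how one could confirm the cycle from that data (the structure is reduced to a name-to-owner map; this is not the garbage collector's actual graph code):

```python
# ownerReferences reduced to object-name -> owner-name, as logged above
owners = {"pod1": "pod3", "pod2": "pod1", "pod3": "pod2"}

def in_cycle(start: str, owners: dict) -> bool:
    """Follow owner links from `start`; returning to `start` means
    the object sits on a dependency circle."""
    seen, cur = set(), owners.get(start)
    while cur is not None and cur not in seen:
        if cur == start:
            return True
        seen.add(cur)
        cur = owners.get(cur)
    return False

print(all(in_cycle(p, owners) for p in owners))  # True: every pod is on the circle
```

The point of the conformance test is that such a circle must not wedge the garbage collector: when the namespace is destroyed, all three pods are still collected.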
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:48:47.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6100" for this suite.

• [SLOW TEST:5.945 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":170,"skipped":2812,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:48:47.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Starting the proxy
Jun 11 11:48:47.991: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix930368102/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:48:48.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1621" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":275,"completed":171,"skipped":2813,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:48:48.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Jun 11 11:48:48.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Jun 11 11:48:57.934: INFO: >>> kubeConfig: /root/.kube/config
Jun 11 11:49:00.899: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:49:11.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7719" for this suite.

• [SLOW TEST:23.484 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":172,"skipped":2848,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:49:11.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-2645b018-47f2-4752-a415-66ffe80f5604
STEP: Creating a pod to test consume secrets
Jun 11 11:49:11.706: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d246d22a-754b-41db-92b0-67dfe8576711" in namespace "projected-1331" to be "Succeeded or Failed"
Jun 11 11:49:11.709: INFO: Pod "pod-projected-secrets-d246d22a-754b-41db-92b0-67dfe8576711": Phase="Pending", Reason="", readiness=false. Elapsed: 3.241992ms
Jun 11 11:49:13.802: INFO: Pod "pod-projected-secrets-d246d22a-754b-41db-92b0-67dfe8576711": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095678902s
Jun 11 11:49:15.805: INFO: Pod "pod-projected-secrets-d246d22a-754b-41db-92b0-67dfe8576711": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098926277s
STEP: Saw pod success
Jun 11 11:49:15.805: INFO: Pod "pod-projected-secrets-d246d22a-754b-41db-92b0-67dfe8576711" satisfied condition "Succeeded or Failed"
Jun 11 11:49:15.920: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-d246d22a-754b-41db-92b0-67dfe8576711 container projected-secret-volume-test: 
STEP: delete the pod
Jun 11 11:49:16.055: INFO: Waiting for pod pod-projected-secrets-d246d22a-754b-41db-92b0-67dfe8576711 to disappear
Jun 11 11:49:16.082: INFO: Pod pod-projected-secrets-d246d22a-754b-41db-92b0-67dfe8576711 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:49:16.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1331" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":173,"skipped":2871,"failed":0}
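The "Waiting up to 5m0s ... to be 'Succeeded or Failed'" lines in the tests above come from a polling loop over the pod phase. A simplified sketch of that loop, with a canned phase sequence standing in for the live API (names and poll count are illustrative; the real framework sleeps ~2s between polls and times out at 5m):

```python
def wait_for_terminal_phase(get_phase, max_polls=5):
    """Poll until the pod reports a terminal phase, mirroring the
    framework's 'Succeeded or Failed' wait condition."""
    for _ in range(max_polls):
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
    raise TimeoutError("pod never reached a terminal phase")

# Canned phases matching the log: two Pending polls, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_terminal_phase(lambda: next(phases)))  # Succeeded
```

This is why the log shows several `Phase="Pending"` entries with growing `Elapsed` values before the final `Phase="Succeeded"` line.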
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:49:16.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with configMap that has name projected-configmap-test-upd-690260fe-a351-44d2-8cce-76e9c1607147
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-690260fe-a351-44d2-8cce-76e9c1607147
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:49:22.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3720" for this suite.

• [SLOW TEST:6.265 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":174,"skipped":2878,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:49:22.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 11:49:22.435: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jun 11 11:49:22.468: INFO: Pod name sample-pod: Found 0 pods out of 1
Jun 11 11:49:27.474: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jun 11 11:49:27.474: INFO: Creating deployment "test-rolling-update-deployment"
Jun 11 11:49:27.478: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jun 11 11:49:27.495: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jun 11 11:49:29.517: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected one
Jun 11 11:49:29.519: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472967, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472967, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472967, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472967, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 11 11:49:31.523: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jun 11 11:49:31.532: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-8551 /apis/apps/v1/namespaces/deployment-8551/deployments/test-rolling-update-deployment e6d19c55-f7e6-46a3-89fb-3c768e9460b2 11522688 1 2020-06-11 11:49:27 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  [{e2e.test Update apps/v1 2020-06-11 11:49:27 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 
102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-06-11 11:49:30 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 
111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0033d1338  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-06-11 11:49:27 +0000 UTC,LastTransitionTime:2020-06-11 11:49:27 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-59d5cb45c7" has successfully progressed.,LastUpdateTime:2020-06-11 11:49:30 +0000 UTC,LastTransitionTime:2020-06-11 11:49:27 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jun 11 11:49:31.535: INFO: New ReplicaSet "test-rolling-update-deployment-59d5cb45c7" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7  deployment-8551 /apis/apps/v1/namespaces/deployment-8551/replicasets/test-rolling-update-deployment-59d5cb45c7 6bf25c0c-1d22-443e-bf79-5e1b61115e33 11522677 1 2020-06-11 11:49:27 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment e6d19c55-f7e6-46a3-89fb-3c768e9460b2 0xc0033d1b07 0xc0033d1b08}] []  [{kube-controller-manager Update apps/v1 2020-06-11 11:49:30 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 54 100 49 57 99 53 53 45 102 55 101 54 45 52 54 97 51 45 56 57 102 98 45 51 99 55 54 56 101 57 52 54 48 98 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 
110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 
114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 59d5cb45c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0033d1bc8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jun 11 11:49:31.535: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jun 11 11:49:31.535: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-8551 /apis/apps/v1/namespaces/deployment-8551/replicasets/test-rolling-update-controller cd472056-c432-4fa6-9b65-eed8bc162e22 11522687 2 2020-06-11 11:49:22 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment e6d19c55-f7e6-46a3-89fb-3c768e9460b2 0xc0033d1967 0xc0033d1968}] []  [{e2e.test Update apps/v1 2020-06-11 11:49:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 
111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-06-11 11:49:30 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 54 100 49 57 99 53 53 45 102 55 101 54 45 52 54 97 51 45 56 57 102 98 45 51 99 55 54 56 101 57 52 54 48 98 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 
102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0033d1a68  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jun 11 11:49:31.538: INFO: Pod "test-rolling-update-deployment-59d5cb45c7-cxvsw" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7-cxvsw test-rolling-update-deployment-59d5cb45c7- deployment-8551 /api/v1/namespaces/deployment-8551/pods/test-rolling-update-deployment-59d5cb45c7-cxvsw 8a2a9027-e2fd-4ce8-9e6c-b967dbab0d13 11522676 0 2020-06-11 11:49:27 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-59d5cb45c7 6bf25c0c-1d22-443e-bf79-5e1b61115e33 0xc0032681d7 0xc0032681d8}] []  [{kube-controller-manager Update v1 2020-06-11 11:49:27 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 98 102 50 53 99 48 99 45 49 100 50 50 45 52 52 51 101 45 98 102 55 57 45 53 101 49 98 54 49 49 49 53 101 51 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 
123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-06-11 11:49:30 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 
115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gkw8v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gkw8v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gkw8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:
nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:49:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:49:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:49:30 
+0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-11 11:49:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.10,StartTime:2020-06-11 11:49:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-11 11:49:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://44aeb469be7c9799f836e11701cf2bd7c570417c76b26db33865ec10c32ac718,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.10,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:49:31.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8551" for this suite.

• [SLOW TEST:9.174 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":175,"skipped":2914,"failed":0}
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:49:31.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun 11 11:49:31.888: INFO: Waiting up to 5m0s for pod "pod-076c214c-32cb-48ff-a05f-97d259270509" in namespace "emptydir-2297" to be "Succeeded or Failed"
Jun 11 11:49:31.896: INFO: Pod "pod-076c214c-32cb-48ff-a05f-97d259270509": Phase="Pending", Reason="", readiness=false. Elapsed: 8.203574ms
Jun 11 11:49:33.902: INFO: Pod "pod-076c214c-32cb-48ff-a05f-97d259270509": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014783691s
Jun 11 11:49:36.016: INFO: Pod "pod-076c214c-32cb-48ff-a05f-97d259270509": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.128698073s
STEP: Saw pod success
Jun 11 11:49:36.016: INFO: Pod "pod-076c214c-32cb-48ff-a05f-97d259270509" satisfied condition "Succeeded or Failed"
Jun 11 11:49:36.020: INFO: Trying to get logs from node kali-worker pod pod-076c214c-32cb-48ff-a05f-97d259270509 container test-container: 
STEP: delete the pod
Jun 11 11:49:36.068: INFO: Waiting for pod pod-076c214c-32cb-48ff-a05f-97d259270509 to disappear
Jun 11 11:49:36.090: INFO: Pod pod-076c214c-32cb-48ff-a05f-97d259270509 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:49:36.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2297" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":176,"skipped":2914,"failed":0}
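The emptydir test above creates a pod that writes a file with mode 0666 into an `emptyDir` volume on the node-default medium, as a non-root user, then checks the reported permissions. A minimal sketch of such a pod manifest built as a Python dict — the image, args, and flag names are assumptions modeled on typical e2e mounttest pods, not the exact values this run used:

```python
# Sketch of a pod that creates a file in an emptyDir volume and reports
# its permissions. Image and mounttest-style args are illustrative
# assumptions, not copied from the actual e2e source.
def emptydir_mode_pod(name, mode=0o666, medium=""):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "securityContext": {"runAsUser": 1000},  # non-root
            "volumes": [{"name": "test-volume",
                         "emptyDir": {"medium": medium}}],  # "" = node default
            "containers": [{
                "name": "test-container",
                "image": "k8s.gcr.io/e2e-test-images/agnhost:2.12",  # assumed image
                "args": ["mounttest",
                         f"--new_file_0{mode:o}=/test-volume/test-file",
                         "--file_perm=/test-volume/test-file"],
                "volumeMounts": [{"name": "test-volume",
                                  "mountPath": "/test-volume"}],
            }],
        },
    }

pod = emptydir_mode_pod("pod-emptydir-demo")
```

The pod reaches `Succeeded` once the container exits 0, which is the "Succeeded or Failed" condition the log polls for.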
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:49:36.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:49:43.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2965" for this suite.

• [SLOW TEST:7.272 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":275,"completed":177,"skipped":2938,"failed":0}
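The ResourceQuota test counts existing quotas, creates one, and then waits for the quota controller to populate `status` so that `status.hard` matches `spec.hard` and `status.used` exists. A sketch of that readiness check over plain dicts (field shapes mirror the v1 ResourceQuota API; the concrete quota values are made up):

```python
# "Status is promptly calculated" means the quota controller has copied
# spec.hard into status.hard and filled in status.used. A minimal model
# of that check:
def quota_status_ready(quota):
    spec_hard = quota["spec"].get("hard", {})
    status = quota.get("status", {})
    return status.get("hard") == spec_hard and "used" in status

calculated = {
    "spec": {"hard": {"pods": "5", "services": "2"}},
    "status": {"hard": {"pods": "5", "services": "2"},
               "used": {"pods": "0", "services": "0"}},
}
pending = {"spec": {"hard": {"pods": "5"}}, "status": {}}
```

The e2e test polls the live object with this predicate until it holds or a timeout fires.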
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:49:43.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 11:49:43.429: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:49:44.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-192" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":275,"completed":178,"skipped":2949,"failed":0}
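"Custom resource defaulting for requests and from storage" verifies that `default` values declared in a CRD's structural OpenAPI schema are applied both when an object is admitted and when it is read back from etcd. A toy defaulter over a structural-schema-like dict — deliberately simplified; the real apiserver logic also handles arrays, pruning, and validation:

```python
# Recursively fill in schema "default" values for missing object fields,
# a simplified model of CRD structural-schema defaulting.
def apply_defaults(obj, schema):
    for key, sub in schema.get("properties", {}).items():
        if key not in obj and "default" in sub:
            obj[key] = sub["default"]
        elif isinstance(obj.get(key), dict) and sub.get("type") == "object":
            apply_defaults(obj[key], sub)
    return obj

schema = {"type": "object",
          "properties": {"spec": {"type": "object",
                                  "properties": {"replicas": {"type": "integer",
                                                              "default": 1}}}}}
obj = apply_defaults({"spec": {}}, schema)
```

Because defaulting also runs on reads from storage, objects persisted before a default was added still come back with the defaulted field set.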
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:49:44.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 11 11:49:45.512: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 11 11:49:47.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472985, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472985, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472985, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727472985, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 11 11:49:50.675: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:49:50.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4971" for this suite.
STEP: Destroying namespace "webhook-4971-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.186 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":179,"skipped":3001,"failed":0}
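The discovery-document test above walks `/apis`, `/apis/admissionregistration.k8s.io`, and `/apis/admissionregistration.k8s.io/v1`, asserting the group, the v1 group/version, and the two webhook-configuration resources all appear. A sketch of those checks against a canned discovery payload (the dict structure mirrors the real APIGroupList/APIResourceList responses; the payloads are hand-written, not fetched):

```python
# Locate an API group in an /apis discovery document and verify the
# webhook resources exist in the v1 resource list.
def find_group(apis_doc, name):
    return next((g for g in apis_doc["groups"] if g["name"] == name), None)

apis = {"groups": [{"name": "admissionregistration.k8s.io",
                    "versions": [{"groupVersion": "admissionregistration.k8s.io/v1",
                                  "version": "v1"}]}]}
v1_doc = {"groupVersion": "admissionregistration.k8s.io/v1",
          "resources": [{"name": "mutatingwebhookconfigurations"},
                        {"name": "validatingwebhookconfigurations"}]}

group = find_group(apis, "admissionregistration.k8s.io")
resource_names = {r["name"] for r in v1_doc["resources"]}
```

Against a live cluster the same shapes come back from `GET /apis` and `GET /apis/admissionregistration.k8s.io/v1`.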
SSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:49:50.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-1dcadd5a-d3e9-4866-83ce-72bcae126786
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:49:57.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7202" for this suite.

• [SLOW TEST:6.529 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":180,"skipped":3005,"failed":0}
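The ConfigMap binary-data test mounts a ConfigMap carrying both `data` (UTF-8 strings) and `binaryData` (base64-encoded arbitrary bytes) and checks both come back intact in the volume. A sketch of building such a ConfigMap; key names and byte values are illustrative:

```python
import base64

# "data" holds UTF-8 strings; "binaryData" holds base64-encoded raw bytes.
# In the mounted volume both appear as files with the original content.
def make_configmap(name, text, raw_bytes):
    return {"apiVersion": "v1", "kind": "ConfigMap",
            "metadata": {"name": name},
            "data": {"data-1": text},
            "binaryData": {"dump.bin": base64.b64encode(raw_bytes).decode("ascii")}}

cm = make_configmap("configmap-demo", "value-1", bytes([0xDE, 0xAD, 0xBE, 0xEF]))
```

The base64 step matters because ConfigMap objects are JSON-serialized; raw bytes that are not valid UTF-8 cannot live in `data`.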
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:49:57.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:50:01.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9412" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":181,"skipped":3006,"failed":0}
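The kubelet test above schedules a busybox command that always fails and then checks that the container status reports a `terminated` state with a reason. A sketch of the status shape the test inspects — container name and exit code are illustrative:

```python
# A container whose command exits non-zero lands in state.terminated
# with reason "Error" and the exit code preserved.
def terminated_reason(container_status):
    term = container_status.get("state", {}).get("terminated")
    return term.get("reason") if term else None

failed = {"name": "bin-false",
          "state": {"terminated": {"exitCode": 1, "reason": "Error"}}}
running = {"name": "ok", "state": {"running": {}}}
```

The e2e test polls `pod.status.containerStatuses` until the terminated state (and its reason) appears.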
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:50:01.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jun 11 11:50:11.647: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 11 11:50:11.658: INFO: Pod pod-with-prestop-http-hook still exists
Jun 11 11:50:13.659: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 11 11:50:13.663: INFO: Pod pod-with-prestop-http-hook still exists
Jun 11 11:50:15.659: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 11 11:50:15.664: INFO: Pod pod-with-prestop-http-hook still exists
Jun 11 11:50:17.659: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 11 11:50:17.663: INFO: Pod pod-with-prestop-http-hook still exists
Jun 11 11:50:19.659: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 11 11:50:19.663: INFO: Pod pod-with-prestop-http-hook still exists
Jun 11 11:50:21.659: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 11 11:50:21.664: INFO: Pod pod-with-prestop-http-hook still exists
Jun 11 11:50:23.659: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 11 11:50:23.663: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:50:23.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5359" for this suite.

• [SLOW TEST:22.173 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":182,"skipped":3039,"failed":0}
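The lifecycle-hook test first starts a helper pod to receive HTTP requests, then creates `pod-with-prestop-http-hook` whose container carries a `preStop` `httpGet` handler pointed at the helper. On deletion the kubelet performs the GET before terminating the container, which is why the log polls for the pod to disappear and then checks the helper saw the request. A sketch of the relevant container stanza — handler host, port, and path are placeholders:

```python
# Attach a preStop httpGet hook to a container spec. On pod deletion the
# kubelet issues this GET before sending SIGTERM to the container.
def with_prestop_http(container, host, port, path="/echo?msg=prestop"):
    container["lifecycle"] = {"preStop": {"httpGet": {"host": host,
                                                      "port": port,
                                                      "path": path}}}
    return container

c = with_prestop_http({"name": "pod-with-prestop-http-hook",
                       "image": "busybox"},  # image is an assumption
                      host="10.244.1.5", port=8080)
```

The repeated "still exists" lines in the log are the expected grace-period window while the hook and termination complete.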
SSSSS
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:50:23.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Jun 11 11:50:23.759: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Jun 11 11:50:23.763: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
Jun 11 11:50:23.763: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Jun 11 11:50:23.778: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
Jun 11 11:50:23.778: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Jun 11 11:50:23.837: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
Jun 11 11:50:23.838: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Jun 11 11:50:31.123: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:50:31.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-8" for this suite.

• [SLOW TEST:7.509 seconds]
[sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":183,"skipped":3044,"failed":0}
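The LimitRange verifications in the log show the defaulting rules: a pod with no resources gets `defaultRequest` (100m CPU) and `default` (500m CPU) applied; a pod that sets only a CPU *limit* of 300m ends up with a CPU *request* of 300m too (an explicit limit implies the same request), while its unspecified memory/ephemeral-storage limits fall back to the LimitRange defaults. A simplified model of that merge — real quantities are parsed `resource.Quantity` values, here plain strings:

```python
# Simplified LimitRange defaulting for one container:
# 1. an explicitly set limit implies the same request if none was given,
# 2. remaining missing limits get the LimitRange "default",
# 3. remaining missing requests get the LimitRange "defaultRequest".
def apply_limitrange(container, default_request, default_limit):
    res = container.setdefault("resources", {})
    req = dict(res.get("requests", {}))
    lim = dict(res.get("limits", {}))
    for k in list(lim):                 # rule 1
        req.setdefault(k, lim[k])
    for k, v in default_limit.items():  # rule 2
        lim.setdefault(k, v)
    for k, v in default_request.items():  # rule 3
        req.setdefault(k, v)
    res["requests"], res["limits"] = req, lim
    return container

empty = apply_limitrange({"name": "a"}, {"cpu": "100m"}, {"cpu": "500m"})
partial = apply_limitrange({"name": "b",
                            "resources": {"limits": {"cpu": "300m"},
                                          "requests": {"memory": "150Mi"}}},
                           {"cpu": "100m", "memory": "200Mi"},
                           {"cpu": "500m", "memory": "500Mi"})
```

`empty` matches the first verification in the log (100m request / 500m limit) and `partial` matches the merged case (300m/300m CPU, 150Mi request with a defaulted 500Mi memory limit).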
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:50:31.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Jun 11 11:50:31.300: INFO: Waiting up to 5m0s for pod "downward-api-e8763bb7-da0c-43b3-acf6-21ef5962061a" in namespace "downward-api-6041" to be "Succeeded or Failed"
Jun 11 11:50:31.325: INFO: Pod "downward-api-e8763bb7-da0c-43b3-acf6-21ef5962061a": Phase="Pending", Reason="", readiness=false. Elapsed: 24.8094ms
Jun 11 11:50:33.460: INFO: Pod "downward-api-e8763bb7-da0c-43b3-acf6-21ef5962061a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159658269s
Jun 11 11:50:35.464: INFO: Pod "downward-api-e8763bb7-da0c-43b3-acf6-21ef5962061a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163717424s
Jun 11 11:50:37.467: INFO: Pod "downward-api-e8763bb7-da0c-43b3-acf6-21ef5962061a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.166945745s
STEP: Saw pod success
Jun 11 11:50:37.467: INFO: Pod "downward-api-e8763bb7-da0c-43b3-acf6-21ef5962061a" satisfied condition "Succeeded or Failed"
Jun 11 11:50:37.469: INFO: Trying to get logs from node kali-worker2 pod downward-api-e8763bb7-da0c-43b3-acf6-21ef5962061a container dapi-container: 
STEP: delete the pod
Jun 11 11:50:37.516: INFO: Waiting for pod downward-api-e8763bb7-da0c-43b3-acf6-21ef5962061a to disappear
Jun 11 11:50:37.645: INFO: Pod downward-api-e8763bb7-da0c-43b3-acf6-21ef5962061a no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:50:37.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6041" for this suite.

• [SLOW TEST:6.505 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":184,"skipped":3061,"failed":0}
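This downward API test relies on the rule that when a container sets no limits, `resourceFieldRef` for `limits.cpu` / `limits.memory` resolves to the node's allocatable capacity. A sketch of the env-var stanzas such a pod uses — the env names are typical e2e choices, not confirmed from this run:

```python
# Build a downward-API env var backed by a resourceFieldRef. With no
# limit set on the container, limits.cpu/limits.memory resolve to node
# allocatable values at runtime.
def downward_env(name, resource, divisor="1"):
    return {"name": name,
            "valueFrom": {"resourceFieldRef": {"resource": resource,
                                               "divisor": divisor}}}

envs = [downward_env("CPU_LIMIT", "limits.cpu"),
        downward_env("MEMORY_LIMIT", "limits.memory")]
```

The container then prints these env vars, and the test asserts they are non-empty, positive values rather than specific numbers, since allocatable differs per node.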
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:50:37.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Jun 11 11:50:38.155: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6785'
Jun 11 11:50:39.318: INFO: stderr: ""
Jun 11 11:50:39.318: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jun 11 11:50:40.323: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 11 11:50:40.323: INFO: Found 0 / 1
Jun 11 11:50:41.322: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 11 11:50:41.322: INFO: Found 0 / 1
Jun 11 11:50:42.322: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 11 11:50:42.322: INFO: Found 0 / 1
Jun 11 11:50:43.419: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 11 11:50:43.419: INFO: Found 0 / 1
Jun 11 11:50:44.324: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 11 11:50:44.324: INFO: Found 1 / 1
Jun 11 11:50:44.324: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jun 11 11:50:44.328: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 11 11:50:44.328: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jun 11 11:50:44.328: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config patch pod agnhost-master-5lwcg --namespace=kubectl-6785 -p {"metadata":{"annotations":{"x":"y"}}}'
Jun 11 11:50:44.427: INFO: stderr: ""
Jun 11 11:50:44.427: INFO: stdout: "pod/agnhost-master-5lwcg patched\n"
STEP: checking annotations
Jun 11 11:50:44.676: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 11 11:50:44.676: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:50:44.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6785" for this suite.

• [SLOW TEST:6.972 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":275,"completed":185,"skipped":3076,"failed":0}
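The `kubectl patch pod ... -p '{"metadata":{"annotations":{"x":"y"}}}'` call in the log performs a strategic merge patch; for plain string maps like annotations, that amounts to a recursive dict merge that adds the new key without disturbing existing ones. A minimal model of that merge for maps (strategic merge has extra list semantics not shown here):

```python
import json

# Recursive dict merge, the map portion of a strategic merge patch:
# nested dicts merge, scalars in the patch win.
def strategic_merge_maps(live, patch):
    out = dict(live)
    for k, v in patch.items():
        if isinstance(v, dict) and isinstance(out.get(k), dict):
            out[k] = strategic_merge_maps(out[k], v)
        else:
            out[k] = v
    return out

live = {"metadata": {"name": "agnhost-master-5lwcg",
                     "annotations": {"app": "agnhost"}}}
patch = json.loads('{"metadata":{"annotations":{"x":"y"}}}')
merged = strategic_merge_maps(live, patch)
```

This is why the "checking annotations" step can expect both the pre-existing annotations and the newly patched `x: y` on every matched pod.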
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:50:44.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override command
Jun 11 11:50:44.840: INFO: Waiting up to 5m0s for pod "client-containers-a2da3483-945b-457d-af56-1e727f56f530" in namespace "containers-6778" to be "Succeeded or Failed"
Jun 11 11:50:44.912: INFO: Pod "client-containers-a2da3483-945b-457d-af56-1e727f56f530": Phase="Pending", Reason="", readiness=false. Elapsed: 71.601597ms
Jun 11 11:50:46.963: INFO: Pod "client-containers-a2da3483-945b-457d-af56-1e727f56f530": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122439898s
Jun 11 11:50:48.967: INFO: Pod "client-containers-a2da3483-945b-457d-af56-1e727f56f530": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.126640281s
STEP: Saw pod success
Jun 11 11:50:48.967: INFO: Pod "client-containers-a2da3483-945b-457d-af56-1e727f56f530" satisfied condition "Succeeded or Failed"
Jun 11 11:50:48.970: INFO: Trying to get logs from node kali-worker pod client-containers-a2da3483-945b-457d-af56-1e727f56f530 container test-container: 
STEP: delete the pod
Jun 11 11:50:49.013: INFO: Waiting for pod client-containers-a2da3483-945b-457d-af56-1e727f56f530 to disappear
Jun 11 11:50:49.052: INFO: Pod client-containers-a2da3483-945b-457d-af56-1e727f56f530 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:50:49.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6778" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":186,"skipped":3100,"failed":0}
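The "override the image's default command" test exercises the pod-spec rule that `command` replaces the image ENTRYPOINT and `args` replaces the image CMD; setting `command` alone also suppresses the image CMD. A small model of the effective invocation under those rules (the example argv values are illustrative):

```python
# Compute the argv a container actually runs, given the image's
# ENTRYPOINT/CMD and the pod spec's optional command/args overrides.
def effective_invocation(entrypoint, cmd, command=None, args=None):
    exe = command if command is not None else entrypoint
    if args is not None:
        tail = args
    elif command is not None:
        tail = []  # an explicit command suppresses the image CMD
    else:
        tail = cmd
    return exe + tail

# The test's scenario: command set, args unset.
overridden = effective_invocation(["/image-entrypoint"], ["image-cmd"],
                                  command=["/agnhost", "entrypoint-tester"])
```

The test then reads the container's output to confirm the override, rather than the image default, ran.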
S
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:50:49.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name secret-emptykey-test-9fb61627-fff2-475c-8647-30ec2e113fb1
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:50:49.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9530" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":187,"skipped":3101,"failed":0}
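The Secrets test above expects the apiserver to reject a Secret whose `data` map contains an empty key, since data keys must be non-empty and restricted to alphanumerics, `-`, `_`, and `.`. A simplified validator with the same flavor — the real one also rejects the special names `.` and `..`, which this sketch omits:

```python
import re

# Approximate the apiserver's Secret data-key validation: non-empty,
# limited character set. (Real validation additionally bans "." and "..".)
_KEY_RE = re.compile(r"^[A-Za-z0-9._-]+$")

def valid_secret_keys(data):
    return all(bool(_KEY_RE.match(k)) for k in data)

good = {"key-1": "dmFsdWUtMQ=="}
bad = {"": "dmFsdWUtMQ=="}  # empty key, as in the test
```

The test passes if the create call fails server-side, so no AfterEach pod cleanup appears in the log.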
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:50:49.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating pod
Jun 11 11:50:53.559: INFO: Pod pod-hostip-88e6d8af-43cb-47de-95d9-021b0cd391d2 has hostIP: 172.17.0.15
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:50:53.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1168" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":188,"skipped":3116,"failed":0}
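The host-IP test creates a pod, waits for it to be scheduled and started, and then asserts `status.hostIP` carries a valid node address (172.17.0.15 in this run). A sketch of that check over a pod-status dict:

```python
import ipaddress

# True if the pod status carries a parseable hostIP, the condition the
# e2e test waits for after the pod starts.
def host_ip_set(pod_status):
    try:
        ipaddress.ip_address(pod_status.get("hostIP", ""))
        return True
    except ValueError:
        return False

running = {"phase": "Running", "hostIP": "172.17.0.15"}
pending = {"phase": "Pending"}
```

Before scheduling, `hostIP` is unset; the test polls until it appears rather than asserting a specific node address.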
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:50:53.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-e771c810-f808-40b4-b8f0-6f59dcc13563
STEP: Creating a pod to test consume secrets
Jun 11 11:50:53.706: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-adbd6057-fa3d-4ffa-81d0-65ab52147a97" in namespace "projected-1929" to be "Succeeded or Failed"
Jun 11 11:50:53.724: INFO: Pod "pod-projected-secrets-adbd6057-fa3d-4ffa-81d0-65ab52147a97": Phase="Pending", Reason="", readiness=false. Elapsed: 18.1151ms
Jun 11 11:50:55.728: INFO: Pod "pod-projected-secrets-adbd6057-fa3d-4ffa-81d0-65ab52147a97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021833428s
Jun 11 11:50:57.754: INFO: Pod "pod-projected-secrets-adbd6057-fa3d-4ffa-81d0-65ab52147a97": Phase="Running", Reason="", readiness=true. Elapsed: 4.047715737s
Jun 11 11:50:59.757: INFO: Pod "pod-projected-secrets-adbd6057-fa3d-4ffa-81d0-65ab52147a97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.051002683s
STEP: Saw pod success
Jun 11 11:50:59.757: INFO: Pod "pod-projected-secrets-adbd6057-fa3d-4ffa-81d0-65ab52147a97" satisfied condition "Succeeded or Failed"
Jun 11 11:50:59.759: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-adbd6057-fa3d-4ffa-81d0-65ab52147a97 container projected-secret-volume-test: 
STEP: delete the pod
Jun 11 11:50:59.924: INFO: Waiting for pod pod-projected-secrets-adbd6057-fa3d-4ffa-81d0-65ab52147a97 to disappear
Jun 11 11:51:00.029: INFO: Pod pod-projected-secrets-adbd6057-fa3d-4ffa-81d0-65ab52147a97 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:51:00.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1929" for this suite.

• [SLOW TEST:6.496 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":189,"skipped":3119,"failed":0}
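[Editor's note: the test above consumes a secret through a projected volume with defaultMode and fsGroup set while running as non-root. A minimal sketch of such a pod, assuming a secret named my-secret already exists (all names illustrative):]

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo        # illustrative name
spec:
  securityContext:
    runAsUser: 1000                  # non-root, as in the test title
    fsGroup: 2000                    # group ownership applied to volume files
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected; sleep 3600"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: secret-vol
    projected:
      defaultMode: 0440              # file mode applied to projected files
      sources:
      - secret:
          name: my-secret            # assumed to exist in the namespace
```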
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:51:00.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 11:51:00.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Jun 11 11:51:03.103: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8199 create -f -'
Jun 11 11:51:18.381: INFO: stderr: ""
Jun 11 11:51:18.381: INFO: stdout: "e2e-test-crd-publish-openapi-9393-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jun 11 11:51:18.381: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8199 delete e2e-test-crd-publish-openapi-9393-crds test-foo'
Jun 11 11:51:18.498: INFO: stderr: ""
Jun 11 11:51:18.498: INFO: stdout: "e2e-test-crd-publish-openapi-9393-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Jun 11 11:51:18.498: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8199 apply -f -'
Jun 11 11:51:18.797: INFO: stderr: ""
Jun 11 11:51:18.797: INFO: stdout: "e2e-test-crd-publish-openapi-9393-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jun 11 11:51:18.797: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8199 delete e2e-test-crd-publish-openapi-9393-crds test-foo'
Jun 11 11:51:18.912: INFO: stderr: ""
Jun 11 11:51:18.912: INFO: stdout: "e2e-test-crd-publish-openapi-9393-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Jun 11 11:51:18.912: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8199 create -f -'
Jun 11 11:51:19.165: INFO: rc: 1
Jun 11 11:51:19.166: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8199 apply -f -'
Jun 11 11:51:20.066: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Jun 11 11:51:20.066: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8199 create -f -'
Jun 11 11:51:20.315: INFO: rc: 1
Jun 11 11:51:20.315: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8199 apply -f -'
Jun 11 11:51:20.574: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Jun 11 11:51:20.574: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9393-crds'
Jun 11 11:51:21.397: INFO: stderr: ""
Jun 11 11:51:21.397: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9393-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Jun 11 11:51:21.398: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9393-crds.metadata'
Jun 11 11:51:21.672: INFO: stderr: ""
Jun 11 11:51:21.672: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9393-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Jun 11 11:51:21.673: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9393-crds.spec'
Jun 11 11:51:22.989: INFO: stderr: ""
Jun 11 11:51:22.989: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9393-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Jun 11 11:51:22.990: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9393-crds.spec.bars'
Jun 11 11:51:23.241: INFO: stderr: ""
Jun 11 11:51:23.241: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9393-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jun 11 11:51:23.242: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9393-crds.spec.bars2'
Jun 11 11:51:24.519: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:51:27.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8199" for this suite.

• [SLOW TEST:27.360 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":190,"skipped":3129,"failed":0}
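[Editor's note: the CRD used by this test carries an openAPIV3Schema, which is what lets kubectl do client-side validation and answer `kubectl explain`. A minimal sketch of such a CRD, roughly mirroring the spec.bars shape in the explain output above (group and names are illustrative):]

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com             # illustrative group/plural
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              bars:
                type: array
                items:
                  type: object
                  required: ["name"]   # create/apply reject objects missing this
                  properties:
                    name:
                      type: string
                    age:
                      type: string
                    bazs:
                      type: array
                      items:
                        type: string
```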
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:51:27.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jun 11 11:51:32.558: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:51:32.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2870" for this suite.

• [SLOW TEST:5.271 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":191,"skipped":3168,"failed":0}
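[Editor's note: adoption works because a ReplicaSet takes ownership of any existing pod whose labels match its selector; changing the pod's label so it no longer matches releases it again. A minimal sketch of the matching ReplicaSet, assuming the pre-existing pod is labeled name=pod-adoption-release as in the STEPs above:]

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: adoption-demo                # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release     # matches the orphan pod, which gets adopted
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: main
        image: busybox
        command: ["sleep", "3600"]
```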
SS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:51:32.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override arguments
Jun 11 11:51:32.781: INFO: Waiting up to 5m0s for pod "client-containers-143652c2-8dee-47e5-bc8c-739a73b685f8" in namespace "containers-6158" to be "Succeeded or Failed"
Jun 11 11:51:32.839: INFO: Pod "client-containers-143652c2-8dee-47e5-bc8c-739a73b685f8": Phase="Pending", Reason="", readiness=false. Elapsed: 58.577513ms
Jun 11 11:51:35.071: INFO: Pod "client-containers-143652c2-8dee-47e5-bc8c-739a73b685f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290384567s
Jun 11 11:51:37.402: INFO: Pod "client-containers-143652c2-8dee-47e5-bc8c-739a73b685f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.621559564s
Jun 11 11:51:39.406: INFO: Pod "client-containers-143652c2-8dee-47e5-bc8c-739a73b685f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.625439157s
Jun 11 11:51:41.410: INFO: Pod "client-containers-143652c2-8dee-47e5-bc8c-739a73b685f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.629374136s
STEP: Saw pod success
Jun 11 11:51:41.410: INFO: Pod "client-containers-143652c2-8dee-47e5-bc8c-739a73b685f8" satisfied condition "Succeeded or Failed"
Jun 11 11:51:41.413: INFO: Trying to get logs from node kali-worker2 pod client-containers-143652c2-8dee-47e5-bc8c-739a73b685f8 container test-container: 
STEP: delete the pod
Jun 11 11:51:41.432: INFO: Waiting for pod client-containers-143652c2-8dee-47e5-bc8c-739a73b685f8 to disappear
Jun 11 11:51:41.436: INFO: Pod client-containers-143652c2-8dee-47e5-bc8c-739a73b685f8 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:51:41.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6158" for this suite.

• [SLOW TEST:8.749 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":192,"skipped":3170,"failed":0}
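[Editor's note: the override tested above relies on the pod spec's args field, which replaces the image's default CMD (while command, if set, would replace the ENTRYPOINT). A minimal sketch with illustrative names:]

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: args-override-demo           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # args replaces the image's default CMD; ENTRYPOINT is left untouched
    args: ["echo", "overridden arguments"]
```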
SSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:51:41.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating server pod server in namespace prestop-3478
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-3478
STEP: Deleting pre-stop pod
Jun 11 11:51:56.629: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:51:56.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-3478" for this suite.

• [SLOW TEST:15.222 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":275,"completed":193,"skipped":3175,"failed":0}
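[Editor's note: the prestop count in the JSON above is recorded because the pod defines a preStop lifecycle hook that the kubelet runs before terminating the container. A minimal sketch — the actual test pod notifies a server pod, here approximated with a hypothetical URL:]

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo                 # illustrative name
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        exec:
          # runs inside the container before SIGTERM on pod deletion;
          # the target URL is hypothetical
          command: ["sh", "-c", "wget -q -O- http://server.example/prestop || true"]
```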
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:51:56.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:51:56.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3153" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":275,"completed":194,"skipped":3220,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:51:56.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Jun 11 11:51:57.087: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jun 11 11:51:57.098: INFO: Waiting for terminating namespaces to be deleted...
Jun 11 11:51:57.101: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Jun 11 11:51:57.108: INFO: tester from prestop-3478 started at 2020-06-11 11:51:45 +0000 UTC (1 container statuses recorded)
Jun 11 11:51:57.108: INFO: 	Container tester ready: true, restart count 0
Jun 11 11:51:57.108: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Jun 11 11:51:57.108: INFO: 	Container kindnet-cni ready: true, restart count 3
Jun 11 11:51:57.108: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Jun 11 11:51:57.108: INFO: 	Container kube-proxy ready: true, restart count 0
Jun 11 11:51:57.108: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Jun 11 11:51:57.139: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Jun 11 11:51:57.139: INFO: 	Container kube-proxy ready: true, restart count 0
Jun 11 11:51:57.139: INFO: server from prestop-3478 started at 2020-06-11 11:51:41 +0000 UTC (1 container statuses recorded)
Jun 11 11:51:57.139: INFO: 	Container server ready: true, restart count 0
Jun 11 11:51:57.139: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Jun 11 11:51:57.139: INFO: 	Container kindnet-cni ready: true, restart count 2
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-5fadcb93-4525-4796-a570-05f75304843e 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-5fadcb93-4525-4796-a570-05f75304843e off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-5fadcb93-4525-4796-a570-05f75304843e
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:57:05.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2357" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:308.615 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":195,"skipped":3245,"failed":0}
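The test above pins two pods to one node with identical hostPort and protocol but different hostIPs. A minimal sketch of the two pod specs involved; the pod names match the STEP lines, but the image, node-selector label, and container port are illustrative placeholders, not the exact values the framework generates:

```python
# Sketch of the conflicting pod specs: same hostPort and protocol, one
# binding 0.0.0.0 (all host addresses) and one binding 127.0.0.1.
def pod_with_host_port(name: str, host_ip: str) -> dict:
    """Build a minimal pod manifest exposing hostPort 54322 on host_ip."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            # illustrative label pinning both pods to the same node
            "nodeSelector": {"kubernetes.io/e2e-example": "95"},
            "containers": [{
                "name": "agnhost",
                "image": "k8s.gcr.io/e2e-test-images/agnhost:2.12",  # illustrative
                "ports": [{
                    "containerPort": 8080,
                    "hostPort": 54322,   # identical hostPort ...
                    "protocol": "TCP",   # ... and protocol
                    "hostIP": host_ip,   # 0.0.0.0 overlaps every hostIP
                }],
            }],
        },
    }

pod4 = pod_with_host_port("pod4", "0.0.0.0")    # expected to schedule
pod5 = pod_with_host_port("pod5", "127.0.0.1")  # expected NOT to schedule
```

Because 0.0.0.0 binds all host addresses, the scheduler treats pod5's port as already taken on pod4's node even though its hostIP differs, which is exactly the conflict the test asserts.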
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:57:05.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 11 11:57:07.624: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 11 11:57:09.634: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727473427, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727473427, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727473427, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727473427, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 11 11:57:12.755: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 11:57:12.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1512-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:57:13.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-45" for this suite.
STEP: Destroying namespace "webhook-45-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.541 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":196,"skipped":3258,"failed":0}
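The webhook test above registers a mutating webhook for a custom resource via the AdmissionRegistration API. A sketch of the shape of that registration object; the API group, resource plural, webhook path, and CA bundle are illustrative placeholders (only the namespace and service name come from the log):

```python
# Illustrative MutatingWebhookConfiguration for a custom resource. The
# group/resource/path values are assumptions for the sketch, not the exact
# ones the e2e framework registers.
crd_mutating_webhook = {
    "apiVersion": "admissionregistration.k8s.io/v1",
    "kind": "MutatingWebhookConfiguration",
    "metadata": {"name": "e2e-test-mutating-webhook-crd"},
    "webhooks": [{
        "name": "mutate-custom-resource.webhook.example.com",
        "rules": [{
            "apiGroups": ["webhook.example.com"],      # CRD group (illustrative)
            "apiVersions": ["v1"],
            "operations": ["CREATE"],                  # mutate on creation
            "resources": ["e2e-test-webhook-crds"],    # CRD plural (illustrative)
        }],
        "clientConfig": {
            "service": {
                "namespace": "webhook-45",             # from the log above
                "name": "e2e-test-webhook",            # from the log above
                "path": "/mutating-custom-resource",   # illustrative path
            },
            # caBundle: base64 PEM bundle for the serving cert (elided)
        },
        "admissionReviewVersions": ["v1"],
        "sideEffects": "None",
    }],
}
```

With this registered, creating a matching custom resource routes the AdmissionReview through the webhook service, and the test then verifies the stored object carries the webhook's mutation.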
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:57:14.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-7a3bdbe5-53d5-4325-a899-18ebc787e146
STEP: Creating a pod to test consume configMaps
Jun 11 11:57:14.179: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5927f232-b184-4911-862e-e571a4b688aa" in namespace "projected-9518" to be "Succeeded or Failed"
Jun 11 11:57:14.211: INFO: Pod "pod-projected-configmaps-5927f232-b184-4911-862e-e571a4b688aa": Phase="Pending", Reason="", readiness=false. Elapsed: 32.636157ms
Jun 11 11:57:16.278: INFO: Pod "pod-projected-configmaps-5927f232-b184-4911-862e-e571a4b688aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099599327s
Jun 11 11:57:18.282: INFO: Pod "pod-projected-configmaps-5927f232-b184-4911-862e-e571a4b688aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.103646493s
STEP: Saw pod success
Jun 11 11:57:18.282: INFO: Pod "pod-projected-configmaps-5927f232-b184-4911-862e-e571a4b688aa" satisfied condition "Succeeded or Failed"
Jun 11 11:57:18.285: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-5927f232-b184-4911-862e-e571a4b688aa container projected-configmap-volume-test: 
STEP: delete the pod
Jun 11 11:57:18.369: INFO: Waiting for pod pod-projected-configmaps-5927f232-b184-4911-862e-e571a4b688aa to disappear
Jun 11 11:57:18.415: INFO: Pod pod-projected-configmaps-5927f232-b184-4911-862e-e571a4b688aa no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:57:18.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9518" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":197,"skipped":3261,"failed":0}
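The "volume with mappings" case above remaps a ConfigMap key to a custom file path through a projected volume. A sketch of that volume definition, assuming illustrative key, path, and mount names:

```python
# Sketch of a projected ConfigMap volume with a key-to-path mapping: the
# key "data-2" surfaces at a nested path instead of its own name.
projected_volume = {
    "name": "projected-configmap-volume",
    "projected": {"sources": [{
        "configMap": {
            "name": "projected-configmap-test-volume-map",
            "items": [{"key": "data-2", "path": "path/to/data-2"}],  # remapped file path
        },
    }]},
}
volume_mount = {
    "name": "projected-configmap-volume",
    "mountPath": "/etc/projected-configmap-volume",
    "readOnly": True,
}
# The test container then reads the remapped file, roughly:
#   cat /etc/projected-configmap-volume/path/to/data-2
# and the test compares the container log against the ConfigMap value.
```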
SSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:57:18.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test hostPath mode
Jun 11 11:57:18.597: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9691" to be "Succeeded or Failed"
Jun 11 11:57:18.611: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.080134ms
Jun 11 11:57:20.883: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286073354s
Jun 11 11:57:22.887: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.290312031s
Jun 11 11:57:24.890: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.293821694s
STEP: Saw pod success
Jun 11 11:57:24.890: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jun 11 11:57:24.893: INFO: Trying to get logs from node kali-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jun 11 11:57:25.170: INFO: Waiting for pod pod-host-path-test to disappear
Jun 11 11:57:25.175: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:57:25.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-9691" for this suite.

• [SLOW TEST:6.761 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":198,"skipped":3268,"failed":0}
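The hostPath test above mounts a host directory and checks the mode the container observes on the mount point. A sketch of the pod; the pod and container names match the log, while the image and the mode-printing command are paraphrased assumptions:

```python
# Sketch of the hostPath-mode pod: mount /tmp from the host and have the
# test container report the mount point's permission bits.
host_path_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-host-path-test"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "test-container-1",
            "image": "k8s.gcr.io/e2e-test-images/agnhost:2.12",   # illustrative
            "command": ["sh", "-c", "stat -c %a /test-volume"],   # print octal mode
            "volumeMounts": [{"name": "test-volume", "mountPath": "/test-volume"}],
        }],
        "volumes": [{
            "name": "test-volume",
            "hostPath": {"path": "/tmp", "type": ""},  # empty type: no existence check
        }],
    },
}
```

The framework waits for the pod to reach "Succeeded" (the Pending-to-Succeeded progression in the log), then reads the container log to assert the expected mode.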
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:57:25.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Jun 11 11:57:29.823: INFO: Successfully updated pod "labelsupdateef9b7eba-be11-410e-addd-7e4bffc9b9d2"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:57:33.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6988" for this suite.

• [SLOW TEST:8.707 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":199,"skipped":3270,"failed":0}
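The "update labels on modification" test exposes the pod's own labels as a file through a projected downward API volume; the "Successfully updated pod" line is the moment the labels are patched and the kubelet rewrites the file in place. A sketch of the pod, with illustrative names, image, and label values:

```python
# Sketch of the labels-update pod: metadata.labels is projected into a
# file, and the kubelet refreshes that file when the labels change,
# without restarting the container.
labels_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "labelsupdate-demo",
        "labels": {"key": "value1"},  # the test later patches this value
    },
    "spec": {
        "containers": [{
            "name": "client-container",
            "image": "k8s.gcr.io/e2e-test-images/agnhost:2.12",  # illustrative
            "command": ["sh", "-c",
                        "while true; do cat /etc/podinfo/labels; sleep 5; done"],
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "projected": {"sources": [{
                "downwardAPI": {"items": [{
                    "path": "labels",
                    "fieldRef": {"fieldPath": "metadata.labels"},  # live-updated field
                }]},
            }]},
        }],
    },
}
```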
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:57:33.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-295826fe-a88e-42e9-ba26-97252dc7f588
STEP: Creating a pod to test consume secrets
Jun 11 11:57:36.036: INFO: Waiting up to 5m0s for pod "pod-secrets-e8557880-da87-40ac-b76c-b857f3dba282" in namespace "secrets-6900" to be "Succeeded or Failed"
Jun 11 11:57:36.063: INFO: Pod "pod-secrets-e8557880-da87-40ac-b76c-b857f3dba282": Phase="Pending", Reason="", readiness=false. Elapsed: 26.246079ms
Jun 11 11:57:38.104: INFO: Pod "pod-secrets-e8557880-da87-40ac-b76c-b857f3dba282": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067373394s
Jun 11 11:57:40.206: INFO: Pod "pod-secrets-e8557880-da87-40ac-b76c-b857f3dba282": Phase="Pending", Reason="", readiness=false. Elapsed: 4.169215497s
Jun 11 11:57:42.356: INFO: Pod "pod-secrets-e8557880-da87-40ac-b76c-b857f3dba282": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.319372342s
STEP: Saw pod success
Jun 11 11:57:42.356: INFO: Pod "pod-secrets-e8557880-da87-40ac-b76c-b857f3dba282" satisfied condition "Succeeded or Failed"
Jun 11 11:57:42.358: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-e8557880-da87-40ac-b76c-b857f3dba282 container secret-volume-test: 
STEP: delete the pod
Jun 11 11:57:42.717: INFO: Waiting for pod pod-secrets-e8557880-da87-40ac-b76c-b857f3dba282 to disappear
Jun 11 11:57:42.726: INFO: Pod pod-secrets-e8557880-da87-40ac-b76c-b857f3dba282 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:57:42.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6900" for this suite.

• [SLOW TEST:8.890 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":200,"skipped":3289,"failed":0}
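The secrets "volume with mappings" case is the Secret counterpart of the ConfigMap mapping tests: a secret key is surfaced at a custom path inside the mount. A sketch, with an illustrative key, path, and payload:

```python
# Sketch of a secret volume with a key-to-path mapping: the key "data-1"
# appears at a renamed file path instead of its own name.
secret = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {"name": "secret-test-map"},
    "data": {"data-1": "dmFsdWUtMQ=="},  # base64("value-1"), illustrative
}
secret_pod_volume = {
    "name": "secret-volume",
    "secret": {
        "secretName": "secret-test-map",
        "items": [{"key": "data-1", "path": "new-path-data-1"}],  # remapped path
    },
}
# The test container reads /etc/secret-volume/new-path-data-1 and the
# framework asserts the container log matches the decoded secret value.
```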
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:57:42.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on tmpfs
Jun 11 11:57:42.968: INFO: Waiting up to 5m0s for pod "pod-a908f672-e9d6-42f6-b804-8df45d0dbe94" in namespace "emptydir-964" to be "Succeeded or Failed"
Jun 11 11:57:42.984: INFO: Pod "pod-a908f672-e9d6-42f6-b804-8df45d0dbe94": Phase="Pending", Reason="", readiness=false. Elapsed: 15.15287ms
Jun 11 11:57:45.018: INFO: Pod "pod-a908f672-e9d6-42f6-b804-8df45d0dbe94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049826775s
Jun 11 11:57:47.023: INFO: Pod "pod-a908f672-e9d6-42f6-b804-8df45d0dbe94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054252241s
STEP: Saw pod success
Jun 11 11:57:47.023: INFO: Pod "pod-a908f672-e9d6-42f6-b804-8df45d0dbe94" satisfied condition "Succeeded or Failed"
Jun 11 11:57:47.026: INFO: Trying to get logs from node kali-worker pod pod-a908f672-e9d6-42f6-b804-8df45d0dbe94 container test-container: 
STEP: delete the pod
Jun 11 11:57:47.170: INFO: Waiting for pod pod-a908f672-e9d6-42f6-b804-8df45d0dbe94 to disappear
Jun 11 11:57:47.172: INFO: Pod pod-a908f672-e9d6-42f6-b804-8df45d0dbe94 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:57:47.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-964" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":201,"skipped":3299,"failed":0}
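The "volume type on tmpfs" step above uses an emptyDir with medium Memory, which the kubelet backs with tmpfs. A sketch of the pod; the image and the fs-type/mode check command are paraphrased assumptions standing in for the framework's mounttest container:

```python
# Sketch of the tmpfs emptyDir pod: medium "Memory" makes the emptyDir a
# tmpfs mount, and the container verifies the fs type and mode bits.
emptydir_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-emptydir-tmpfs"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "test-container",
            "image": "k8s.gcr.io/e2e-test-images/agnhost:2.12",  # illustrative
            # paraphrase of the mounttest checks: fs type, then octal mode
            "command": ["sh", "-c",
                        "grep ' /test-volume tmpfs ' /proc/mounts"
                        " && stat -c %a /test-volume"],
            "volumeMounts": [{"name": "test-volume", "mountPath": "/test-volume"}],
        }],
        "volumes": [{
            "name": "test-volume",
            "emptyDir": {"medium": "Memory"},  # tmpfs-backed emptyDir
        }],
    },
}
```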
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:57:47.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-65a0327d-b1af-42d3-8d56-be2612dd1129
STEP: Creating a pod to test consume secrets
Jun 11 11:57:47.245: INFO: Waiting up to 5m0s for pod "pod-secrets-fc0a469d-7857-4780-b400-24791dc30dc8" in namespace "secrets-4378" to be "Succeeded or Failed"
Jun 11 11:57:47.295: INFO: Pod "pod-secrets-fc0a469d-7857-4780-b400-24791dc30dc8": Phase="Pending", Reason="", readiness=false. Elapsed: 49.990487ms
Jun 11 11:57:49.345: INFO: Pod "pod-secrets-fc0a469d-7857-4780-b400-24791dc30dc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099666423s
Jun 11 11:57:51.348: INFO: Pod "pod-secrets-fc0a469d-7857-4780-b400-24791dc30dc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.103470491s
STEP: Saw pod success
Jun 11 11:57:51.348: INFO: Pod "pod-secrets-fc0a469d-7857-4780-b400-24791dc30dc8" satisfied condition "Succeeded or Failed"
Jun 11 11:57:51.351: INFO: Trying to get logs from node kali-worker pod pod-secrets-fc0a469d-7857-4780-b400-24791dc30dc8 container secret-volume-test: 
STEP: delete the pod
Jun 11 11:57:51.433: INFO: Waiting for pod pod-secrets-fc0a469d-7857-4780-b400-24791dc30dc8 to disappear
Jun 11 11:57:51.499: INFO: Pod pod-secrets-fc0a469d-7857-4780-b400-24791dc30dc8 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:57:51.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4378" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":202,"skipped":3316,"failed":0}
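The "multiple volumes in a pod" case mounts the same Secret twice through two separate volume entries. A sketch of that pod spec fragment, with illustrative names:

```python
# Sketch of one secret consumed through two volumes: two volume entries
# reference the same secretName and are mounted at different paths.
def secret_volume(volume_name: str) -> dict:
    """One pod volume entry backed by the shared secret (name illustrative)."""
    return {"name": volume_name, "secret": {"secretName": "secret-test"}}

multi_mount_pod_spec = {
    "restartPolicy": "Never",
    "containers": [{
        "name": "secret-volume-test",
        "image": "k8s.gcr.io/e2e-test-images/agnhost:2.12",  # illustrative
        "volumeMounts": [
            {"name": "secret-volume-1", "mountPath": "/etc/secret-volume-1"},
            {"name": "secret-volume-2", "mountPath": "/etc/secret-volume-2"},
        ],
    }],
    "volumes": [
        secret_volume("secret-volume-1"),
        secret_volume("secret-volume-2"),
    ],
}
```

The test then verifies the same secret content is readable under both mount paths.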
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:57:51.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-370cff0f-5732-4aba-829a-795d772fd9b4
STEP: Creating a pod to test consume configMaps
Jun 11 11:57:51.662: INFO: Waiting up to 5m0s for pod "pod-configmaps-f3d241dc-36a4-4a60-8550-7e5f257f1545" in namespace "configmap-7690" to be "Succeeded or Failed"
Jun 11 11:57:51.667: INFO: Pod "pod-configmaps-f3d241dc-36a4-4a60-8550-7e5f257f1545": Phase="Pending", Reason="", readiness=false. Elapsed: 5.340746ms
Jun 11 11:57:53.867: INFO: Pod "pod-configmaps-f3d241dc-36a4-4a60-8550-7e5f257f1545": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204603664s
Jun 11 11:57:55.876: INFO: Pod "pod-configmaps-f3d241dc-36a4-4a60-8550-7e5f257f1545": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.214235532s
STEP: Saw pod success
Jun 11 11:57:55.876: INFO: Pod "pod-configmaps-f3d241dc-36a4-4a60-8550-7e5f257f1545" satisfied condition "Succeeded or Failed"
Jun 11 11:57:55.878: INFO: Trying to get logs from node kali-worker pod pod-configmaps-f3d241dc-36a4-4a60-8550-7e5f257f1545 container configmap-volume-test: 
STEP: delete the pod
Jun 11 11:57:55.942: INFO: Waiting for pod pod-configmaps-f3d241dc-36a4-4a60-8550-7e5f257f1545 to disappear
Jun 11 11:57:55.961: INFO: Pod pod-configmaps-f3d241dc-36a4-4a60-8550-7e5f257f1545 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:57:55.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7690" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":203,"skipped":3325,"failed":0}
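The "mappings and Item mode set" case adds a per-item mode on top of the key-to-path mapping: the mapped file gets its own permission bits instead of the volume's defaultMode. A sketch of the volume entry, with illustrative key and path; note the API carries modes as decimal int32, so 0o400 serializes to 256 in JSON:

```python
# Sketch of a ConfigMap volume where one item overrides defaultMode with
# its own mode bits (0400: owner read-only).
configmap_volume = {
    "name": "configmap-volume",
    "configMap": {
        "name": "configmap-test-volume-map",
        "defaultMode": 0o644,             # fallback for unlisted keys
        "items": [{
            "key": "data-2",
            "path": "path/to/data-2",
            "mode": 0o400,                # overrides defaultMode for this file
        }],
    },
}
```

The test container stats the mapped file and the framework asserts the 0400 mode from the container log.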
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:57:55.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 11:57:56.034: INFO: Create a RollingUpdate DaemonSet
Jun 11 11:57:56.037: INFO: Check that daemon pods launch on every node of the cluster
Jun 11 11:57:56.081: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:57:56.084: INFO: Number of nodes with available pods: 0
Jun 11 11:57:56.084: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:57:57.089: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:57:57.092: INFO: Number of nodes with available pods: 0
Jun 11 11:57:57.092: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:57:58.089: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:57:58.092: INFO: Number of nodes with available pods: 0
Jun 11 11:57:58.092: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:57:59.089: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:57:59.092: INFO: Number of nodes with available pods: 0
Jun 11 11:57:59.092: INFO: Node kali-worker is running more than one daemon pod
Jun 11 11:58:00.346: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:58:00.394: INFO: Number of nodes with available pods: 1
Jun 11 11:58:00.394: INFO: Node kali-worker2 is running more than one daemon pod
Jun 11 11:58:01.100: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:58:01.104: INFO: Number of nodes with available pods: 2
Jun 11 11:58:01.104: INFO: Number of running nodes: 2, number of available pods: 2
Jun 11 11:58:01.104: INFO: Update the DaemonSet to trigger a rollout
Jun 11 11:58:01.111: INFO: Updating DaemonSet daemon-set
Jun 11 11:58:14.164: INFO: Roll back the DaemonSet before rollout is complete
Jun 11 11:58:14.171: INFO: Updating DaemonSet daemon-set
Jun 11 11:58:14.171: INFO: Make sure DaemonSet rollback is complete
Jun 11 11:58:14.183: INFO: Wrong image for pod: daemon-set-w9wmr. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jun 11 11:58:14.183: INFO: Pod daemon-set-w9wmr is not available
Jun 11 11:58:14.211: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:58:15.216: INFO: Wrong image for pod: daemon-set-w9wmr. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jun 11 11:58:15.216: INFO: Pod daemon-set-w9wmr is not available
Jun 11 11:58:15.219: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:58:16.338: INFO: Wrong image for pod: daemon-set-w9wmr. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jun 11 11:58:16.339: INFO: Pod daemon-set-w9wmr is not available
Jun 11 11:58:16.569: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 11 11:58:17.279: INFO: Pod daemon-set-hzz4b is not available
Jun 11 11:58:17.282: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-37, will wait for the garbage collector to delete the pods
Jun 11 11:58:17.346: INFO: Deleting DaemonSet.extensions daemon-set took: 4.931776ms
Jun 11 11:58:17.646: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.241437ms
Jun 11 11:58:23.469: INFO: Number of nodes with available pods: 0
Jun 11 11:58:23.469: INFO: Number of running nodes: 0, number of available pods: 0
Jun 11 11:58:23.472: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-37/daemonsets","resourceVersion":"11525166"},"items":null}

Jun 11 11:58:23.475: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-37/pods","resourceVersion":"11525166"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:58:23.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-37" for this suite.

• [SLOW TEST:27.522 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":204,"skipped":3339,"failed":0}
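The DaemonSet rollback test drives a RollingUpdate to a non-existent image ("foo:non-existent" in the log), then rolls back before the rollout completes; only the pod that already picked up the bad image (daemon-set-w9wmr) is replaced, which is the "without unnecessary restarts" property. A sketch of the DaemonSet; the selector label and container name are illustrative, while the namespace, DaemonSet name, and good image come from the log:

```python
# Sketch of the RollingUpdate DaemonSet under test.
daemon_set = {
    "apiVersion": "apps/v1",
    "kind": "DaemonSet",
    "metadata": {"name": "daemon-set", "namespace": "daemonsets-37"},
    "spec": {
        "selector": {"matchLabels": {"daemonset-name": "daemon-set"}},
        "updateStrategy": {"type": "RollingUpdate"},  # default, shown explicitly
        "template": {
            "metadata": {"labels": {"daemonset-name": "daemon-set"}},
            "spec": {
                "containers": [{
                    "name": "app",
                    # the known-good image from the log; the rollout swaps
                    # this to foo:non-existent, then the rollback restores it
                    "image": "docker.io/library/httpd:2.4.38-alpine",
                }],
            },
        },
    },
}
# A rollback equivalent to what the test performs through the API:
#   kubectl -n daemonsets-37 rollout undo daemonset/daemon-set
```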
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:58:23.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jun 11 11:58:23.652: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c06928c8-98f5-4289-a0c5-7355c7a1ea8b" in namespace "downward-api-5436" to be "Succeeded or Failed"
Jun 11 11:58:23.693: INFO: Pod "downwardapi-volume-c06928c8-98f5-4289-a0c5-7355c7a1ea8b": Phase="Pending", Reason="", readiness=false. Elapsed: 41.086298ms
Jun 11 11:58:25.698: INFO: Pod "downwardapi-volume-c06928c8-98f5-4289-a0c5-7355c7a1ea8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045901445s
Jun 11 11:58:27.702: INFO: Pod "downwardapi-volume-c06928c8-98f5-4289-a0c5-7355c7a1ea8b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049503949s
Jun 11 11:58:29.709: INFO: Pod "downwardapi-volume-c06928c8-98f5-4289-a0c5-7355c7a1ea8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.056597661s
STEP: Saw pod success
Jun 11 11:58:29.709: INFO: Pod "downwardapi-volume-c06928c8-98f5-4289-a0c5-7355c7a1ea8b" satisfied condition "Succeeded or Failed"
Jun 11 11:58:29.712: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-c06928c8-98f5-4289-a0c5-7355c7a1ea8b container client-container: 
STEP: delete the pod
Jun 11 11:58:29.741: INFO: Waiting for pod downwardapi-volume-c06928c8-98f5-4289-a0c5-7355c7a1ea8b to disappear
Jun 11 11:58:29.745: INFO: Pod downwardapi-volume-c06928c8-98f5-4289-a0c5-7355c7a1ea8b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:58:29.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5436" for this suite.

• [SLOW TEST:6.261 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":205,"skipped":3370,"failed":0}
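The repeated `Waiting up to 5m0s … Phase="Pending" … Elapsed: …` lines above come from the framework's pod-phase poll loop, which waits for a terminal phase ("Succeeded or Failed") with a fixed interval and an overall timeout. A minimal sketch of that pattern, assuming a hypothetical `get_phase` callable standing in for the API request:

```python
import time

def wait_for_pod(get_phase, timeout=300, interval=2):
    """Poll get_phase() until a terminal phase or the timeout expires,
    mirroring the framework's 'Succeeded or Failed' condition."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Phase="{phase}". Elapsed: {elapsed:.6f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        time.sleep(interval)

# Simulated phase sequence like the one logged above.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod(lambda: next(phases), interval=0)
```

The real framework polls every 2 seconds, which is why consecutive `Elapsed` values in the log are roughly 2s apart.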
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:58:29.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jun 11 11:58:29.843: INFO: Waiting up to 5m0s for pod "downwardapi-volume-82933d08-04d1-4070-84d2-4cf4cc7431c2" in namespace "projected-3890" to be "Succeeded or Failed"
Jun 11 11:58:29.890: INFO: Pod "downwardapi-volume-82933d08-04d1-4070-84d2-4cf4cc7431c2": Phase="Pending", Reason="", readiness=false. Elapsed: 46.305016ms
Jun 11 11:58:31.894: INFO: Pod "downwardapi-volume-82933d08-04d1-4070-84d2-4cf4cc7431c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051001339s
Jun 11 11:58:33.900: INFO: Pod "downwardapi-volume-82933d08-04d1-4070-84d2-4cf4cc7431c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056912565s
STEP: Saw pod success
Jun 11 11:58:33.900: INFO: Pod "downwardapi-volume-82933d08-04d1-4070-84d2-4cf4cc7431c2" satisfied condition "Succeeded or Failed"
Jun 11 11:58:33.903: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-82933d08-04d1-4070-84d2-4cf4cc7431c2 container client-container: 
STEP: delete the pod
Jun 11 11:58:33.942: INFO: Waiting for pod downwardapi-volume-82933d08-04d1-4070-84d2-4cf4cc7431c2 to disappear
Jun 11 11:58:33.956: INFO: Pod downwardapi-volume-82933d08-04d1-4070-84d2-4cf4cc7431c2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:58:33.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3890" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":206,"skipped":3416,"failed":0}
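The `DefaultMode` spec above verifies the permission bits the kubelet sets on projected files. One practical wrinkle when writing such specs by hand: `defaultMode` is a plain integer field, and JSON manifests carry it in decimal, so the familiar octal values must be converted. A quick sketch of the conversion:

```python
# Pod-spec file modes are plain integers; JSON manifests carry them in
# decimal, so octal notation like 0644 needs converting first.
def octal_mode_to_decimal(octal_str):
    return int(octal_str, 8)

assert octal_mode_to_decimal("644") == 420  # 0644 -> "defaultMode": 420
assert octal_mode_to_decimal("400") == 256  # 0400 -> "defaultMode": 256
```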
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:58:33.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-fef5dce5-c6b5-47c9-be79-96bb62ce1545
STEP: Creating a pod to test consume secrets
Jun 11 11:58:34.286: INFO: Waiting up to 5m0s for pod "pod-secrets-9cb6a1d4-388f-4bd4-8934-7ed6084f5233" in namespace "secrets-3460" to be "Succeeded or Failed"
Jun 11 11:58:34.310: INFO: Pod "pod-secrets-9cb6a1d4-388f-4bd4-8934-7ed6084f5233": Phase="Pending", Reason="", readiness=false. Elapsed: 23.924684ms
Jun 11 11:58:36.314: INFO: Pod "pod-secrets-9cb6a1d4-388f-4bd4-8934-7ed6084f5233": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027954196s
Jun 11 11:58:38.318: INFO: Pod "pod-secrets-9cb6a1d4-388f-4bd4-8934-7ed6084f5233": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032262342s
STEP: Saw pod success
Jun 11 11:58:38.318: INFO: Pod "pod-secrets-9cb6a1d4-388f-4bd4-8934-7ed6084f5233" satisfied condition "Succeeded or Failed"
Jun 11 11:58:38.322: INFO: Trying to get logs from node kali-worker pod pod-secrets-9cb6a1d4-388f-4bd4-8934-7ed6084f5233 container secret-volume-test: 
STEP: delete the pod
Jun 11 11:58:38.366: INFO: Waiting for pod pod-secrets-9cb6a1d4-388f-4bd4-8934-7ed6084f5233 to disappear
Jun 11 11:58:38.451: INFO: Pod pod-secrets-9cb6a1d4-388f-4bd4-8934-7ed6084f5233 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:58:38.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3460" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":207,"skipped":3438,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:58:38.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-b23b099b-5b1a-44f1-b7b6-0ada7d7cf873
STEP: Creating a pod to test consume configMaps
Jun 11 11:58:38.666: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-70b59643-80bc-4e14-84dd-a6ddf62e62af" in namespace "projected-2626" to be "Succeeded or Failed"
Jun 11 11:58:38.681: INFO: Pod "pod-projected-configmaps-70b59643-80bc-4e14-84dd-a6ddf62e62af": Phase="Pending", Reason="", readiness=false. Elapsed: 15.591597ms
Jun 11 11:58:40.686: INFO: Pod "pod-projected-configmaps-70b59643-80bc-4e14-84dd-a6ddf62e62af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019766815s
Jun 11 11:58:42.690: INFO: Pod "pod-projected-configmaps-70b59643-80bc-4e14-84dd-a6ddf62e62af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023952685s
Jun 11 11:58:44.694: INFO: Pod "pod-projected-configmaps-70b59643-80bc-4e14-84dd-a6ddf62e62af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02815477s
STEP: Saw pod success
Jun 11 11:58:44.694: INFO: Pod "pod-projected-configmaps-70b59643-80bc-4e14-84dd-a6ddf62e62af" satisfied condition "Succeeded or Failed"
Jun 11 11:58:44.698: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-70b59643-80bc-4e14-84dd-a6ddf62e62af container projected-configmap-volume-test: 
STEP: delete the pod
Jun 11 11:58:44.718: INFO: Waiting for pod pod-projected-configmaps-70b59643-80bc-4e14-84dd-a6ddf62e62af to disappear
Jun 11 11:58:44.722: INFO: Pod pod-projected-configmaps-70b59643-80bc-4e14-84dd-a6ddf62e62af no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:58:44.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2626" for this suite.

• [SLOW TEST:6.269 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":208,"skipped":3445,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:58:44.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jun 11 11:58:44.851: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-153 /api/v1/namespaces/watch-153/configmaps/e2e-watch-test-watch-closed 4629a9f4-bbd1-43fc-a345-79203df9241c 11525350 0 2020-06-11 11:58:44 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-06-11 11:58:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jun 11 11:58:44.851: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-153 /api/v1/namespaces/watch-153/configmaps/e2e-watch-test-watch-closed 4629a9f4-bbd1-43fc-a345-79203df9241c 11525351 0 2020-06-11 11:58:44 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-06-11 11:58:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jun 11 11:58:44.862: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-153 /api/v1/namespaces/watch-153/configmaps/e2e-watch-test-watch-closed 4629a9f4-bbd1-43fc-a345-79203df9241c 11525352 0 2020-06-11 11:58:44 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-06-11 11:58:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jun 11 11:58:44.862: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-153 /api/v1/namespaces/watch-153/configmaps/e2e-watch-test-watch-closed 4629a9f4-bbd1-43fc-a345-79203df9241c 11525353 0 2020-06-11 11:58:44 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-06-11 11:58:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:58:44.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-153" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":209,"skipped":3450,"failed":0}
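The `Raw` field inside the `managedFields` entries in the watch events above is printed as a Go byte slice, but decoded as UTF-8 it is ordinary FieldsV1 JSON. A sketch decoding the byte values copied from the first `ADDED` event:

```python
import json

# Byte values copied verbatim from the FieldsV1 Raw slice in the ADDED event.
raw = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123,
       34, 102, 58, 108, 97, 98, 101, 108, 115, 34, 58, 123, 34, 46, 34,
       58, 123, 125, 44, 34, 102, 58, 119, 97, 116, 99, 104, 45, 116, 104,
       105, 115, 45, 99, 111, 110, 102, 105, 103, 109, 97, 112, 34, 58,
       123, 125, 125, 125, 125]

decoded = bytes(raw).decode("utf-8")
fields = json.loads(decoded)
# decoded == '{"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}'
print(decoded)
```

The `f:` prefixes mark field names in the server-side-apply field ownership format; here the `e2e.test` manager owns the `watch-this-configmap` label.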
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:58:44.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jun 11 11:58:44.965: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f5066eda-8cbc-449e-b9a1-990b14677d8a" in namespace "projected-3154" to be "Succeeded or Failed"
Jun 11 11:58:44.982: INFO: Pod "downwardapi-volume-f5066eda-8cbc-449e-b9a1-990b14677d8a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.871116ms
Jun 11 11:58:46.987: INFO: Pod "downwardapi-volume-f5066eda-8cbc-449e-b9a1-990b14677d8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02126228s
Jun 11 11:58:48.991: INFO: Pod "downwardapi-volume-f5066eda-8cbc-449e-b9a1-990b14677d8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025257135s
STEP: Saw pod success
Jun 11 11:58:48.991: INFO: Pod "downwardapi-volume-f5066eda-8cbc-449e-b9a1-990b14677d8a" satisfied condition "Succeeded or Failed"
Jun 11 11:58:48.993: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-f5066eda-8cbc-449e-b9a1-990b14677d8a container client-container: 
STEP: delete the pod
Jun 11 11:58:49.029: INFO: Waiting for pod downwardapi-volume-f5066eda-8cbc-449e-b9a1-990b14677d8a to disappear
Jun 11 11:58:49.052: INFO: Pod downwardapi-volume-f5066eda-8cbc-449e-b9a1-990b14677d8a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:58:49.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3154" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":210,"skipped":3460,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:58:49.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 11:58:53.455: INFO: Waiting up to 5m0s for pod "client-envvars-b2e05ce2-2818-4edd-ad95-2d73fd91a36d" in namespace "pods-6679" to be "Succeeded or Failed"
Jun 11 11:58:53.465: INFO: Pod "client-envvars-b2e05ce2-2818-4edd-ad95-2d73fd91a36d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.319313ms
Jun 11 11:58:55.469: INFO: Pod "client-envvars-b2e05ce2-2818-4edd-ad95-2d73fd91a36d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013791053s
Jun 11 11:58:57.472: INFO: Pod "client-envvars-b2e05ce2-2818-4edd-ad95-2d73fd91a36d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0167704s
STEP: Saw pod success
Jun 11 11:58:57.472: INFO: Pod "client-envvars-b2e05ce2-2818-4edd-ad95-2d73fd91a36d" satisfied condition "Succeeded or Failed"
Jun 11 11:58:57.474: INFO: Trying to get logs from node kali-worker2 pod client-envvars-b2e05ce2-2818-4edd-ad95-2d73fd91a36d container env3cont: 
STEP: delete the pod
Jun 11 11:58:57.492: INFO: Waiting for pod client-envvars-b2e05ce2-2818-4edd-ad95-2d73fd91a36d to disappear
Jun 11 11:58:57.508: INFO: Pod client-envvars-b2e05ce2-2818-4edd-ad95-2d73fd91a36d no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:58:57.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6679" for this suite.

• [SLOW TEST:8.455 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":211,"skipped":3485,"failed":0}
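The Pods spec above checks that the kubelet injects environment variables describing existing services into new containers. The documented naming convention is the service name uppercased with dashes replaced by underscores, suffixed with `_SERVICE_HOST` / `_SERVICE_PORT`; a sketch of that mapping (the `redis-master` name is just an illustrative example):

```python
def service_env_prefix(service_name):
    """Kubelet-style env var prefix: name uppercased, dashes to underscores."""
    return service_name.upper().replace("-", "_")

name = "redis-master"
prefix = service_env_prefix(name)
host_var = f"{prefix}_SERVICE_HOST"  # REDIS_MASTER_SERVICE_HOST
port_var = f"{prefix}_SERVICE_PORT"  # REDIS_MASTER_SERVICE_PORT
```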
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:58:57.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-0700fc88-5b14-4b97-b667-dcbaa790ebb9
STEP: Creating a pod to test consume secrets
Jun 11 11:59:00.797: INFO: Waiting up to 5m0s for pod "pod-secrets-16b49885-4c5e-40e0-81a3-1afba65f3837" in namespace "secrets-6343" to be "Succeeded or Failed"
Jun 11 11:59:00.980: INFO: Pod "pod-secrets-16b49885-4c5e-40e0-81a3-1afba65f3837": Phase="Pending", Reason="", readiness=false. Elapsed: 182.696937ms
Jun 11 11:59:03.063: INFO: Pod "pod-secrets-16b49885-4c5e-40e0-81a3-1afba65f3837": Phase="Pending", Reason="", readiness=false. Elapsed: 2.26564975s
Jun 11 11:59:05.164: INFO: Pod "pod-secrets-16b49885-4c5e-40e0-81a3-1afba65f3837": Phase="Pending", Reason="", readiness=false. Elapsed: 4.367013307s
Jun 11 11:59:07.169: INFO: Pod "pod-secrets-16b49885-4c5e-40e0-81a3-1afba65f3837": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.371522837s
STEP: Saw pod success
Jun 11 11:59:07.169: INFO: Pod "pod-secrets-16b49885-4c5e-40e0-81a3-1afba65f3837" satisfied condition "Succeeded or Failed"
Jun 11 11:59:07.173: INFO: Trying to get logs from node kali-worker pod pod-secrets-16b49885-4c5e-40e0-81a3-1afba65f3837 container secret-volume-test: 
STEP: delete the pod
Jun 11 11:59:07.205: INFO: Waiting for pod pod-secrets-16b49885-4c5e-40e0-81a3-1afba65f3837 to disappear
Jun 11 11:59:07.221: INFO: Pod pod-secrets-16b49885-4c5e-40e0-81a3-1afba65f3837 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:59:07.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6343" for this suite.
STEP: Destroying namespace "secret-namespace-5183" for this suite.

• [SLOW TEST:9.717 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":212,"skipped":3504,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:59:07.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating secret secrets-1167/secret-test-acb34248-82be-431d-89ff-45d37bd2d680
STEP: Creating a pod to test consume secrets
Jun 11 11:59:07.313: INFO: Waiting up to 5m0s for pod "pod-configmaps-196ad7ca-ff9e-4872-8f9b-e9e058bcae77" in namespace "secrets-1167" to be "Succeeded or Failed"
Jun 11 11:59:07.335: INFO: Pod "pod-configmaps-196ad7ca-ff9e-4872-8f9b-e9e058bcae77": Phase="Pending", Reason="", readiness=false. Elapsed: 21.810847ms
Jun 11 11:59:09.340: INFO: Pod "pod-configmaps-196ad7ca-ff9e-4872-8f9b-e9e058bcae77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026361145s
Jun 11 11:59:11.347: INFO: Pod "pod-configmaps-196ad7ca-ff9e-4872-8f9b-e9e058bcae77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033922119s
STEP: Saw pod success
Jun 11 11:59:11.347: INFO: Pod "pod-configmaps-196ad7ca-ff9e-4872-8f9b-e9e058bcae77" satisfied condition "Succeeded or Failed"
Jun 11 11:59:11.350: INFO: Trying to get logs from node kali-worker pod pod-configmaps-196ad7ca-ff9e-4872-8f9b-e9e058bcae77 container env-test: 
STEP: delete the pod
Jun 11 11:59:11.414: INFO: Waiting for pod pod-configmaps-196ad7ca-ff9e-4872-8f9b-e9e058bcae77 to disappear
Jun 11 11:59:11.425: INFO: Pod pod-configmaps-196ad7ca-ff9e-4872-8f9b-e9e058bcae77 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:59:11.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1167" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":213,"skipped":3538,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:59:11.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Jun 11 11:59:18.056: INFO: Successfully updated pod "adopt-release-6hlhh"
STEP: Checking that the Job readopts the Pod
Jun 11 11:59:18.057: INFO: Waiting up to 15m0s for pod "adopt-release-6hlhh" in namespace "job-6262" to be "adopted"
Jun 11 11:59:18.078: INFO: Pod "adopt-release-6hlhh": Phase="Running", Reason="", readiness=true. Elapsed: 21.802701ms
Jun 11 11:59:20.082: INFO: Pod "adopt-release-6hlhh": Phase="Running", Reason="", readiness=true. Elapsed: 2.025423247s
Jun 11 11:59:20.082: INFO: Pod "adopt-release-6hlhh" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Jun 11 11:59:20.592: INFO: Successfully updated pod "adopt-release-6hlhh"
STEP: Checking that the Job releases the Pod
Jun 11 11:59:20.592: INFO: Waiting up to 15m0s for pod "adopt-release-6hlhh" in namespace "job-6262" to be "released"
Jun 11 11:59:20.612: INFO: Pod "adopt-release-6hlhh": Phase="Running", Reason="", readiness=true. Elapsed: 20.386952ms
Jun 11 11:59:22.628: INFO: Pod "adopt-release-6hlhh": Phase="Running", Reason="", readiness=true. Elapsed: 2.036488303s
Jun 11 11:59:22.628: INFO: Pod "adopt-release-6hlhh" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:59:22.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6262" for this suite.

• [SLOW TEST:11.320 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":214,"skipped":3554,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:59:22.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-99ab66c7-c4b6-4edf-adfe-430704829f87
STEP: Creating a pod to test consume configMaps
Jun 11 11:59:23.002: INFO: Waiting up to 5m0s for pod "pod-configmaps-4dbeca0c-76d3-484d-91fb-96ae9b205293" in namespace "configmap-6979" to be "Succeeded or Failed"
Jun 11 11:59:23.006: INFO: Pod "pod-configmaps-4dbeca0c-76d3-484d-91fb-96ae9b205293": Phase="Pending", Reason="", readiness=false. Elapsed: 3.219775ms
Jun 11 11:59:25.010: INFO: Pod "pod-configmaps-4dbeca0c-76d3-484d-91fb-96ae9b205293": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007541251s
Jun 11 11:59:27.014: INFO: Pod "pod-configmaps-4dbeca0c-76d3-484d-91fb-96ae9b205293": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011510258s
STEP: Saw pod success
Jun 11 11:59:27.014: INFO: Pod "pod-configmaps-4dbeca0c-76d3-484d-91fb-96ae9b205293" satisfied condition "Succeeded or Failed"
Jun 11 11:59:27.017: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-4dbeca0c-76d3-484d-91fb-96ae9b205293 container configmap-volume-test: 
STEP: delete the pod
Jun 11 11:59:27.031: INFO: Waiting for pod pod-configmaps-4dbeca0c-76d3-484d-91fb-96ae9b205293 to disappear
Jun 11 11:59:27.061: INFO: Pod pod-configmaps-4dbeca0c-76d3-484d-91fb-96ae9b205293 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:59:27.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6979" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":215,"skipped":3570,"failed":0}
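Each `{"msg":"PASSED …"}` line interleaved in this log is a machine-readable progress record emitted after a spec completes, so the suite's progress can be scraped without parsing the surrounding free-form output. A sketch, using the record above as sample input:

```python
import json

# A progress record copied from the log line above.
line = ('{"msg":"PASSED [sig-storage] ConfigMap should be consumable in '
        'multiple volumes in the same pod [NodeConformance] [Conformance]",'
        '"total":275,"completed":215,"skipped":3570,"failed":0}')

rec = json.loads(line)
pct = 100 * rec["completed"] / rec["total"]
print(f'{rec["completed"]}/{rec["total"]} specs ({pct:.1f}%), '
      f'{rec["skipped"]} skipped, {rec["failed"]} failed')
```

Note that `skipped` counts skipped *specs* (the `S` markers between tests), which is why it grows far faster than `completed`: only 275 of the 4992 specs in the suite are selected for this run.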
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:59:27.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 11 11:59:28.106: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 11 11:59:30.189: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727473568, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727473568, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727473568, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727473568, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 11 11:59:32.198: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727473568, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727473568, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727473568, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727473568, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 11 11:59:35.719: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:59:46.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9538" for this suite.
STEP: Destroying namespace "webhook-9538-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:19.131 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":216,"skipped":3628,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:59:46.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:59:46.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-4930" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":217,"skipped":3674,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:59:46.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-4b47e9e3-f46e-4ac1-b0d5-b2cb7ef1f4cc
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-4b47e9e3-f46e-4ac1-b0d5-b2cb7ef1f4cc
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:59:53.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4157" for this suite.

• [SLOW TEST:6.993 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":218,"skipped":3700,"failed":0}
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:59:53.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating api versions
Jun 11 11:59:53.858: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config api-versions'
Jun 11 11:59:54.086: INFO: stderr: ""
Jun 11 11:59:54.086: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 11:59:54.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7674" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":275,"completed":219,"skipped":3700,"failed":0}

------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 11:59:54.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-downwardapi-qq4v
STEP: Creating a pod to test atomic-volume-subpath
Jun 11 11:59:54.194: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-qq4v" in namespace "subpath-984" to be "Succeeded or Failed"
Jun 11 11:59:54.211: INFO: Pod "pod-subpath-test-downwardapi-qq4v": Phase="Pending", Reason="", readiness=false. Elapsed: 16.318056ms
Jun 11 11:59:56.243: INFO: Pod "pod-subpath-test-downwardapi-qq4v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048421651s
Jun 11 11:59:58.247: INFO: Pod "pod-subpath-test-downwardapi-qq4v": Phase="Running", Reason="", readiness=true. Elapsed: 4.052689685s
Jun 11 12:00:00.250: INFO: Pod "pod-subpath-test-downwardapi-qq4v": Phase="Running", Reason="", readiness=true. Elapsed: 6.055974458s
Jun 11 12:00:02.290: INFO: Pod "pod-subpath-test-downwardapi-qq4v": Phase="Running", Reason="", readiness=true. Elapsed: 8.095300802s
Jun 11 12:00:04.350: INFO: Pod "pod-subpath-test-downwardapi-qq4v": Phase="Running", Reason="", readiness=true. Elapsed: 10.1557587s
Jun 11 12:00:06.386: INFO: Pod "pod-subpath-test-downwardapi-qq4v": Phase="Running", Reason="", readiness=true. Elapsed: 12.192191901s
Jun 11 12:00:08.390: INFO: Pod "pod-subpath-test-downwardapi-qq4v": Phase="Running", Reason="", readiness=true. Elapsed: 14.195736919s
Jun 11 12:00:10.393: INFO: Pod "pod-subpath-test-downwardapi-qq4v": Phase="Running", Reason="", readiness=true. Elapsed: 16.198680388s
Jun 11 12:00:12.397: INFO: Pod "pod-subpath-test-downwardapi-qq4v": Phase="Running", Reason="", readiness=true. Elapsed: 18.202788493s
Jun 11 12:00:14.410: INFO: Pod "pod-subpath-test-downwardapi-qq4v": Phase="Running", Reason="", readiness=true. Elapsed: 20.215817882s
Jun 11 12:00:16.415: INFO: Pod "pod-subpath-test-downwardapi-qq4v": Phase="Running", Reason="", readiness=true. Elapsed: 22.220422404s
Jun 11 12:00:18.451: INFO: Pod "pod-subpath-test-downwardapi-qq4v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.256830625s
STEP: Saw pod success
Jun 11 12:00:18.451: INFO: Pod "pod-subpath-test-downwardapi-qq4v" satisfied condition "Succeeded or Failed"
Jun 11 12:00:18.454: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-downwardapi-qq4v container test-container-subpath-downwardapi-qq4v: 
STEP: delete the pod
Jun 11 12:00:18.612: INFO: Waiting for pod pod-subpath-test-downwardapi-qq4v to disappear
Jun 11 12:00:18.643: INFO: Pod pod-subpath-test-downwardapi-qq4v no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-qq4v
Jun 11 12:00:18.643: INFO: Deleting pod "pod-subpath-test-downwardapi-qq4v" in namespace "subpath-984"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:00:18.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-984" for this suite.

• [SLOW TEST:24.583 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":220,"skipped":3700,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:00:18.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 11 12:00:19.827: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 11 12:00:21.838: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727473620, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727473620, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727473620, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727473619, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 11 12:00:24.864: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:00:25.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5510" for this suite.
STEP: Destroying namespace "webhook-5510-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.091 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":221,"skipped":3706,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:00:25.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:00:29.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1853" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":222,"skipped":3713,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:00:29.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-777568d0-f344-47f7-bbe7-e1f9ce117307 in namespace container-probe-3961
Jun 11 12:00:33.937: INFO: Started pod busybox-777568d0-f344-47f7-bbe7-e1f9ce117307 in namespace container-probe-3961
STEP: checking the pod's current state and verifying that restartCount is present
Jun 11 12:00:33.940: INFO: Initial restart count of pod busybox-777568d0-f344-47f7-bbe7-e1f9ce117307 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:04:35.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3961" for this suite.

• [SLOW TEST:245.190 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":223,"skipped":3738,"failed":0}
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:04:35.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:05:09.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6333" for this suite.

• [SLOW TEST:34.424 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":224,"skipped":3738,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:05:09.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-38738083-cda8-43ef-8b69-57d9d96e934c
STEP: Creating a pod to test consume secrets
Jun 11 12:05:09.629: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6834ff7b-6ca1-404f-af36-b40478ac33ba" in namespace "projected-1250" to be "Succeeded or Failed"
Jun 11 12:05:09.638: INFO: Pod "pod-projected-secrets-6834ff7b-6ca1-404f-af36-b40478ac33ba": Phase="Pending", Reason="", readiness=false. Elapsed: 9.040211ms
Jun 11 12:05:11.862: INFO: Pod "pod-projected-secrets-6834ff7b-6ca1-404f-af36-b40478ac33ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232528611s
Jun 11 12:05:13.866: INFO: Pod "pod-projected-secrets-6834ff7b-6ca1-404f-af36-b40478ac33ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.236405528s
STEP: Saw pod success
Jun 11 12:05:13.866: INFO: Pod "pod-projected-secrets-6834ff7b-6ca1-404f-af36-b40478ac33ba" satisfied condition "Succeeded or Failed"
Jun 11 12:05:13.868: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-6834ff7b-6ca1-404f-af36-b40478ac33ba container projected-secret-volume-test: 
STEP: delete the pod
Jun 11 12:05:14.092: INFO: Waiting for pod pod-projected-secrets-6834ff7b-6ca1-404f-af36-b40478ac33ba to disappear
Jun 11 12:05:14.102: INFO: Pod pod-projected-secrets-6834ff7b-6ca1-404f-af36-b40478ac33ba no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:05:14.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1250" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":225,"skipped":3739,"failed":0}
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:05:14.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Jun 11 12:05:14.195: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:05:21.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8053" for this suite.

• [SLOW TEST:7.716 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":226,"skipped":3742,"failed":0}
SSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:05:21.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Jun 11 12:05:36.845: INFO: Successfully updated pod "labelsupdatef1da9740-778c-4835-8c76-2ee985353aa7"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:05:38.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-584" for this suite.

• [SLOW TEST:17.060 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":227,"skipped":3745,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:05:38.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 11 12:05:39.793: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 11 12:05:41.803: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727473939, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727473939, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727473939, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727473939, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 11 12:05:44.934: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:05:45.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-384" for this suite.
STEP: Destroying namespace "webhook-384-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.341 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":228,"skipped":3750,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:05:47.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-18a02b63-7015-489b-8d1b-dddda334880f
STEP: Creating configMap with name cm-test-opt-upd-f5633b3b-f8a1-4334-b955-1bab6b2d9018
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-18a02b63-7015-489b-8d1b-dddda334880f
STEP: Updating configmap cm-test-opt-upd-f5633b3b-f8a1-4334-b955-1bab6b2d9018
STEP: Creating configMap with name cm-test-opt-create-c8565af0-5782-4885-b0ec-67723fce3599
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:05:58.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9637" for this suite.

• [SLOW TEST:10.954 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":229,"skipped":3776,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:05:58.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jun 11 12:05:58.318: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-6684 /api/v1/namespaces/watch-6684/configmaps/e2e-watch-test-label-changed 1dc55555-d988-401d-afec-de8122e7b9ae 11527361 0 2020-06-11 12:05:58 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-06-11 12:05:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jun 11 12:05:58.318: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-6684 /api/v1/namespaces/watch-6684/configmaps/e2e-watch-test-label-changed 1dc55555-d988-401d-afec-de8122e7b9ae 11527362 0 2020-06-11 12:05:58 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-06-11 12:05:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Jun 11 12:05:58.318: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-6684 /api/v1/namespaces/watch-6684/configmaps/e2e-watch-test-label-changed 1dc55555-d988-401d-afec-de8122e7b9ae 11527363 0 2020-06-11 12:05:58 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-06-11 12:05:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
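The `FieldsV1{Raw:*[...]}` dumps in the watch events above are managed-fields JSON printed as decimal byte arrays. A small sketch (plain Python, using the array from the ADDED event) recovers the readable form:

```python
# Decimal bytes copied from the FieldsV1 Raw dump of the ADDED event above.
raw = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58,
       123, 34, 102, 58, 108, 97, 98, 101, 108, 115, 34, 58, 123, 34,
       46, 34, 58, 123, 125, 44, 34, 102, 58, 119, 97, 116, 99, 104,
       45, 116, 104, 105, 115, 45, 99, 111, 110, 102, 105, 103, 109,
       97, 112, 34, 58, 123, 125, 125, 125, 125]

# FieldsV1 is UTF-8 JSON; decoding the bytes yields the managed-fields set.
decoded = bytes(raw).decode("utf-8")
print(decoded)
# → {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}
```

The later MODIFIED/DELETED events carry the same structure with an added `f:data` entry for the `mutation` key.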
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jun 11 12:06:08.381: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-6684 /api/v1/namespaces/watch-6684/configmaps/e2e-watch-test-label-changed 1dc55555-d988-401d-afec-de8122e7b9ae 11527414 0 2020-06-11 12:05:58 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-06-11 12:06:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jun 11 12:06:08.381: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-6684 /api/v1/namespaces/watch-6684/configmaps/e2e-watch-test-label-changed 1dc55555-d988-401d-afec-de8122e7b9ae 11527415 0 2020-06-11 12:05:58 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-06-11 12:06:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Jun 11 12:06:08.381: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-6684 /api/v1/namespaces/watch-6684/configmaps/e2e-watch-test-label-changed 1dc55555-d988-401d-afec-de8122e7b9ae 11527416 0 2020-06-11 12:05:58 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-06-11 12:06:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:06:08.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6684" for this suite.

• [SLOW TEST:10.206 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":230,"skipped":3798,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:06:08.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-d2ebf2dc-1dd2-402a-9ad2-bab9388c8f16
STEP: Creating a pod to test consume secrets
Jun 11 12:06:08.454: INFO: Waiting up to 5m0s for pod "pod-secrets-e269a091-2362-4105-8add-0deae9f93867" in namespace "secrets-265" to be "Succeeded or Failed"
Jun 11 12:06:08.471: INFO: Pod "pod-secrets-e269a091-2362-4105-8add-0deae9f93867": Phase="Pending", Reason="", readiness=false. Elapsed: 17.710476ms
Jun 11 12:06:10.503: INFO: Pod "pod-secrets-e269a091-2362-4105-8add-0deae9f93867": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048894293s
Jun 11 12:06:12.527: INFO: Pod "pod-secrets-e269a091-2362-4105-8add-0deae9f93867": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072891229s
STEP: Saw pod success
Jun 11 12:06:12.527: INFO: Pod "pod-secrets-e269a091-2362-4105-8add-0deae9f93867" satisfied condition "Succeeded or Failed"
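The Pending/Pending/Succeeded lines above come from the framework's polling wait ("Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'"). A minimal sketch of that kind of poll-until-deadline loop (names and intervals are my own, not the framework's):

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0):
    """Poll `condition` until it returns truthy or `timeout` elapses.

    Hypothetical stand-in for the e2e framework's pod-phase wait loop,
    which re-checks roughly every 2s as seen in the Elapsed values above.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.0fs" % timeout)
        time.sleep(interval)

# Usage: succeed on the third poll, mimicking Pending, Pending, Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for(lambda: next(phases) == "Succeeded", timeout=5, interval=0))
```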
Jun 11 12:06:12.530: INFO: Trying to get logs from node kali-worker pod pod-secrets-e269a091-2362-4105-8add-0deae9f93867 container secret-volume-test: 
STEP: delete the pod
Jun 11 12:06:12.691: INFO: Waiting for pod pod-secrets-e269a091-2362-4105-8add-0deae9f93867 to disappear
Jun 11 12:06:12.695: INFO: Pod pod-secrets-e269a091-2362-4105-8add-0deae9f93867 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:06:12.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-265" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":231,"skipped":3840,"failed":0}
SS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:06:12.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods changes
Jun 11 12:06:12.847: INFO: Pod name pod-release: Found 0 pods out of 1
Jun 11 12:06:17.898: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:06:17.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5119" for this suite.

• [SLOW TEST:5.372 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":232,"skipped":3842,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:06:18.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jun 11 12:06:29.028: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:06:29.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3457" for this suite.

• [SLOW TEST:11.094 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":233,"skipped":3859,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:06:29.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 11 12:06:30.036: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 11 12:06:32.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727473990, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727473990, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727473990, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727473989, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 11 12:06:34.991: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727473990, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727473990, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727473990, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727473989, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 11 12:06:38.029: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:06:38.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9083" for this suite.
STEP: Destroying namespace "webhook-9083-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.096 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":234,"skipped":3868,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:06:38.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 12:06:38.326: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:06:39.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1499" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":275,"completed":235,"skipped":3888,"failed":0}
S
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:06:39.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap that has name configmap-test-emptyKey-c3769baa-0859-46b1-9932-3efb8dde0347
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:06:39.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3798" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":236,"skipped":3889,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:06:39.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288
STEP: creating a pod
Jun 11 12:06:39.689: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-5015 -- logs-generator --log-lines-total 100 --run-duration 20s'
Jun 11 12:06:42.319: INFO: stderr: ""
Jun 11 12:06:42.319: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Waiting for log generator to start.
Jun 11 12:06:42.319: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Jun 11 12:06:42.319: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-5015" to be "running and ready, or succeeded"
Jun 11 12:06:42.339: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 19.53516ms
Jun 11 12:06:44.443: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124017748s
Jun 11 12:06:46.447: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.128041745s
Jun 11 12:06:46.447: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Jun 11 12:06:46.447: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Jun 11 12:06:46.447: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5015'
Jun 11 12:06:46.642: INFO: stderr: ""
Jun 11 12:06:46.642: INFO: stdout: "I0611 12:06:45.382296       1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/bwm5 536\nI0611 12:06:45.582483       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/j7rz 437\nI0611 12:06:45.782541       1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/cn4 386\nI0611 12:06:45.982466       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/58d6 394\nI0611 12:06:46.182507       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/djv 334\nI0611 12:06:46.382502       1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/qrd4 411\nI0611 12:06:46.582484       1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/zwpc 284\n"
STEP: limiting log lines
Jun 11 12:06:46.642: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5015 --tail=1'
Jun 11 12:06:46.740: INFO: stderr: ""
Jun 11 12:06:46.740: INFO: stdout: "I0611 12:06:46.582484       1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/zwpc 284\n"
Jun 11 12:06:46.740: INFO: got output "I0611 12:06:46.582484       1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/zwpc 284\n"
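Each logs-generator line in the stdout above follows a fixed shape: a klog prefix, then `<seq> <verb> /api/v1/namespaces/<ns>/pods/<name> <size>`. A small parser sketch for that format (the field names are my own):

```python
import re

# Match the payload after the klog prefix "I0611 HH:MM:SS.micros  pid file:line]".
LINE_RE = re.compile(
    r"\] (?P<seq>\d+) (?P<verb>GET|PUT|POST) "
    r"/api/v1/namespaces/(?P<ns>[^/]+)/pods/(?P<pod>\S+) (?P<size>\d+)$"
)

def parse_line(line: str) -> dict:
    """Extract sequence, HTTP verb, namespace, pod and size from one line."""
    m = LINE_RE.search(line.rstrip("\n"))
    if m is None:
        raise ValueError("unrecognized logs_generator line: %r" % line)
    return m.groupdict()

# The --tail=1 output captured above.
sample = ("I0611 12:06:46.582484       1 logs_generator.go:76] "
          "6 GET /api/v1/namespaces/kube-system/pods/zwpc 284")
print(parse_line(sample))
```

The later `--limit-bytes=1` and `--timestamps` steps truncate or prefix these same lines, so only the full-line form is parseable this way.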
STEP: limiting log bytes
Jun 11 12:06:46.740: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5015 --limit-bytes=1'
Jun 11 12:06:46.843: INFO: stderr: ""
Jun 11 12:06:46.843: INFO: stdout: "I"
Jun 11 12:06:46.843: INFO: got output "I"
STEP: exposing timestamps
Jun 11 12:06:46.843: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5015 --tail=1 --timestamps'
Jun 11 12:06:46.954: INFO: stderr: ""
Jun 11 12:06:46.954: INFO: stdout: "2020-06-11T12:06:46.782645889Z I0611 12:06:46.782481       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/4tjs 557\n"
Jun 11 12:06:46.954: INFO: got output "2020-06-11T12:06:46.782645889Z I0611 12:06:46.782481       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/4tjs 557\n"
STEP: restricting to a time range
Jun 11 12:06:49.454: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5015 --since=1s'
Jun 11 12:06:49.776: INFO: stderr: ""
Jun 11 12:06:49.776: INFO: stdout: "I0611 12:06:48.782509       1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/k9dr 571\nI0611 12:06:48.982511       1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/rpg 474\nI0611 12:06:49.182461       1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/82xn 325\nI0611 12:06:49.382488       1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/zjhl 299\nI0611 12:06:49.582530       1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/ljs 279\n"
Jun 11 12:06:49.777: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5015 --since=24h'
Jun 11 12:06:49.893: INFO: stderr: ""
Jun 11 12:06:49.893: INFO: stdout: "I0611 12:06:45.382296       1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/bwm5 536\nI0611 12:06:45.582483       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/j7rz 437\nI0611 12:06:45.782541       1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/cn4 386\nI0611 12:06:45.982466       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/58d6 394\nI0611 12:06:46.182507       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/djv 334\nI0611 12:06:46.382502       1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/qrd4 411\nI0611 12:06:46.582484       1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/zwpc 284\nI0611 12:06:46.782481       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/4tjs 557\nI0611 12:06:46.982481       1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/7r6 549\nI0611 12:06:47.182491       1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/kwr 226\nI0611 12:06:47.382482       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/twb 386\nI0611 12:06:47.582448       1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/2gd 496\nI0611 12:06:47.782511       1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/tc74 378\nI0611 12:06:47.982464       1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/rnl 578\nI0611 12:06:48.182570       1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/pbjk 559\nI0611 12:06:48.382526       1 logs_generator.go:76] 15 PUT /api/v1/namespaces/ns/pods/hr5n 352\nI0611 12:06:48.582531       1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/ngnh 415\nI0611 12:06:48.782509       1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/k9dr 571\nI0611 12:06:48.982511       1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/rpg 474\nI0611 12:06:49.182461       1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/82xn 325\nI0611 12:06:49.382488       1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/zjhl 299\nI0611 12:06:49.582530       1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/ljs 279\nI0611 12:06:49.782449       1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/64hk 202\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294
Jun 11 12:06:49.893: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-5015'
Jun 11 12:07:03.730: INFO: stderr: ""
Jun 11 12:07:03.730: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:07:03.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5015" for this suite.

• [SLOW TEST:24.105 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":275,"completed":237,"skipped":3893,"failed":0}
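The log-filtering flags exercised by the test above can be reproduced against any running pod; this is a sketch, and the pod and namespace names below are placeholders taken from the run.

```shell
# Filter pod logs with kubectl; pod "logs-generator" in namespace
# "kubectl-5015" are placeholders for any running pod.
kubectl logs logs-generator --namespace=kubectl-5015 --tail=1              # only the last log line
kubectl logs logs-generator --namespace=kubectl-5015 --limit-bytes=1       # only the first byte
kubectl logs logs-generator --namespace=kubectl-5015 --tail=1 --timestamps # prefix each line with an RFC3339 timestamp
kubectl logs logs-generator --namespace=kubectl-5015 --since=1s            # only lines from the last second
kubectl logs logs-generator --namespace=kubectl-5015 --since=24h           # lines from the last 24 hours
```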
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:07:03.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Jun 11 12:07:03.819: INFO: Waiting up to 5m0s for pod "downward-api-564aa7a0-469f-437b-9f5b-a3cf0d89a504" in namespace "downward-api-2280" to be "Succeeded or Failed"
Jun 11 12:07:03.832: INFO: Pod "downward-api-564aa7a0-469f-437b-9f5b-a3cf0d89a504": Phase="Pending", Reason="", readiness=false. Elapsed: 13.040911ms
Jun 11 12:07:05.837: INFO: Pod "downward-api-564aa7a0-469f-437b-9f5b-a3cf0d89a504": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01781276s
Jun 11 12:07:07.842: INFO: Pod "downward-api-564aa7a0-469f-437b-9f5b-a3cf0d89a504": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022756113s
STEP: Saw pod success
Jun 11 12:07:07.842: INFO: Pod "downward-api-564aa7a0-469f-437b-9f5b-a3cf0d89a504" satisfied condition "Succeeded or Failed"
Jun 11 12:07:07.845: INFO: Trying to get logs from node kali-worker2 pod downward-api-564aa7a0-469f-437b-9f5b-a3cf0d89a504 container dapi-container: 
STEP: delete the pod
Jun 11 12:07:07.910: INFO: Waiting for pod downward-api-564aa7a0-469f-437b-9f5b-a3cf0d89a504 to disappear
Jun 11 12:07:07.994: INFO: Pod downward-api-564aa7a0-469f-437b-9f5b-a3cf0d89a504 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:07:07.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2280" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":238,"skipped":4048,"failed":0}
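The pattern verified above can be sketched as a pod that surfaces the node IP through the downward API; the pod name, image, and env var name here are illustrative.

```shell
# Minimal sketch of the downward-API env var the test checks:
# expose the node's IP (status.hostIP) to a container.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostip-demo            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox             # illustrative image
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # node IP injected by the kubelet
EOF
```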
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:07:08.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-6212
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 11 12:07:08.135: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 11 12:07:08.299: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 11 12:07:10.396: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 11 12:07:12.318: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 11 12:07:14.303: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 11 12:07:16.304: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 11 12:07:18.302: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 11 12:07:20.304: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 11 12:07:22.304: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 11 12:07:24.304: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 11 12:07:26.303: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 11 12:07:28.304: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 11 12:07:28.310: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 11 12:07:30.315: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 11 12:07:34.343: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.41:8080/dial?request=hostname&protocol=http&host=10.244.2.114&port=8080&tries=1'] Namespace:pod-network-test-6212 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 11 12:07:34.343: INFO: >>> kubeConfig: /root/.kube/config
I0611 12:07:34.375035       7 log.go:172] (0xc001f002c0) (0xc00129b040) Create stream
I0611 12:07:34.375070       7 log.go:172] (0xc001f002c0) (0xc00129b040) Stream added, broadcasting: 1
I0611 12:07:34.377495       7 log.go:172] (0xc001f002c0) Reply frame received for 1
I0611 12:07:34.377529       7 log.go:172] (0xc001f002c0) (0xc00129b0e0) Create stream
I0611 12:07:34.377542       7 log.go:172] (0xc001f002c0) (0xc00129b0e0) Stream added, broadcasting: 3
I0611 12:07:34.378412       7 log.go:172] (0xc001f002c0) Reply frame received for 3
I0611 12:07:34.378451       7 log.go:172] (0xc001f002c0) (0xc002a64280) Create stream
I0611 12:07:34.378467       7 log.go:172] (0xc001f002c0) (0xc002a64280) Stream added, broadcasting: 5
I0611 12:07:34.379173       7 log.go:172] (0xc001f002c0) Reply frame received for 5
I0611 12:07:34.591358       7 log.go:172] (0xc001f002c0) Data frame received for 3
I0611 12:07:34.591399       7 log.go:172] (0xc00129b0e0) (3) Data frame handling
I0611 12:07:34.591427       7 log.go:172] (0xc00129b0e0) (3) Data frame sent
I0611 12:07:34.591945       7 log.go:172] (0xc001f002c0) Data frame received for 5
I0611 12:07:34.591968       7 log.go:172] (0xc002a64280) (5) Data frame handling
I0611 12:07:34.591997       7 log.go:172] (0xc001f002c0) Data frame received for 3
I0611 12:07:34.592068       7 log.go:172] (0xc00129b0e0) (3) Data frame handling
I0611 12:07:34.594270       7 log.go:172] (0xc001f002c0) Data frame received for 1
I0611 12:07:34.594311       7 log.go:172] (0xc00129b040) (1) Data frame handling
I0611 12:07:34.594340       7 log.go:172] (0xc00129b040) (1) Data frame sent
I0611 12:07:34.594373       7 log.go:172] (0xc001f002c0) (0xc00129b040) Stream removed, broadcasting: 1
I0611 12:07:34.594409       7 log.go:172] (0xc001f002c0) Go away received
I0611 12:07:34.594500       7 log.go:172] (0xc001f002c0) (0xc00129b040) Stream removed, broadcasting: 1
I0611 12:07:34.594533       7 log.go:172] (0xc001f002c0) (0xc00129b0e0) Stream removed, broadcasting: 3
I0611 12:07:34.594546       7 log.go:172] (0xc001f002c0) (0xc002a64280) Stream removed, broadcasting: 5
Jun 11 12:07:34.594: INFO: Waiting for responses: map[]
Jun 11 12:07:34.598: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.41:8080/dial?request=hostname&protocol=http&host=10.244.1.40&port=8080&tries=1'] Namespace:pod-network-test-6212 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 11 12:07:34.598: INFO: >>> kubeConfig: /root/.kube/config
I0611 12:07:34.626734       7 log.go:172] (0xc001e9a580) (0xc0024268c0) Create stream
I0611 12:07:34.626762       7 log.go:172] (0xc001e9a580) (0xc0024268c0) Stream added, broadcasting: 1
I0611 12:07:34.629791       7 log.go:172] (0xc001e9a580) Reply frame received for 1
I0611 12:07:34.629830       7 log.go:172] (0xc001e9a580) (0xc00129b180) Create stream
I0611 12:07:34.629843       7 log.go:172] (0xc001e9a580) (0xc00129b180) Stream added, broadcasting: 3
I0611 12:07:34.630673       7 log.go:172] (0xc001e9a580) Reply frame received for 3
I0611 12:07:34.630707       7 log.go:172] (0xc001e9a580) (0xc00129b400) Create stream
I0611 12:07:34.630720       7 log.go:172] (0xc001e9a580) (0xc00129b400) Stream added, broadcasting: 5
I0611 12:07:34.631710       7 log.go:172] (0xc001e9a580) Reply frame received for 5
I0611 12:07:34.682911       7 log.go:172] (0xc001e9a580) Data frame received for 3
I0611 12:07:34.682934       7 log.go:172] (0xc00129b180) (3) Data frame handling
I0611 12:07:34.682947       7 log.go:172] (0xc00129b180) (3) Data frame sent
I0611 12:07:34.683512       7 log.go:172] (0xc001e9a580) Data frame received for 5
I0611 12:07:34.683560       7 log.go:172] (0xc00129b400) (5) Data frame handling
I0611 12:07:34.683592       7 log.go:172] (0xc001e9a580) Data frame received for 3
I0611 12:07:34.683609       7 log.go:172] (0xc00129b180) (3) Data frame handling
I0611 12:07:34.685609       7 log.go:172] (0xc001e9a580) Data frame received for 1
I0611 12:07:34.685625       7 log.go:172] (0xc0024268c0) (1) Data frame handling
I0611 12:07:34.685638       7 log.go:172] (0xc0024268c0) (1) Data frame sent
I0611 12:07:34.685651       7 log.go:172] (0xc001e9a580) (0xc0024268c0) Stream removed, broadcasting: 1
I0611 12:07:34.685730       7 log.go:172] (0xc001e9a580) (0xc0024268c0) Stream removed, broadcasting: 1
I0611 12:07:34.685747       7 log.go:172] (0xc001e9a580) (0xc00129b180) Stream removed, broadcasting: 3
I0611 12:07:34.685837       7 log.go:172] (0xc001e9a580) Go away received
I0611 12:07:34.685861       7 log.go:172] (0xc001e9a580) (0xc00129b400) Stream removed, broadcasting: 5
Jun 11 12:07:34.685: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:07:34.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6212" for this suite.

• [SLOW TEST:26.681 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":239,"skipped":4051,"failed":0}
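The connectivity check above reduces to an in-pod HTTP probe from a test pod to another pod's IP; pod name and IPs below are the ones from this run and are placeholders for any cluster.

```shell
# Replay the intra-pod HTTP probe by exec'ing curl inside the test pod.
# "test-container-pod", the namespace, and the pod IPs are placeholders.
kubectl exec test-container-pod --namespace=pod-network-test-6212 -- \
  /bin/sh -c "curl -g -q -s 'http://10.244.1.41:8080/dial?request=hostname&protocol=http&host=10.244.2.114&port=8080&tries=1'"
```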
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:07:34.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun 11 12:07:34.769: INFO: Waiting up to 5m0s for pod "pod-08e99714-7480-4fbe-9741-d9b40d1bc866" in namespace "emptydir-8334" to be "Succeeded or Failed"
Jun 11 12:07:34.808: INFO: Pod "pod-08e99714-7480-4fbe-9741-d9b40d1bc866": Phase="Pending", Reason="", readiness=false. Elapsed: 38.794003ms
Jun 11 12:07:36.851: INFO: Pod "pod-08e99714-7480-4fbe-9741-d9b40d1bc866": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08160922s
Jun 11 12:07:38.854: INFO: Pod "pod-08e99714-7480-4fbe-9741-d9b40d1bc866": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085385434s
STEP: Saw pod success
Jun 11 12:07:38.854: INFO: Pod "pod-08e99714-7480-4fbe-9741-d9b40d1bc866" satisfied condition "Succeeded or Failed"
Jun 11 12:07:38.857: INFO: Trying to get logs from node kali-worker pod pod-08e99714-7480-4fbe-9741-d9b40d1bc866 container test-container: 
STEP: delete the pod
Jun 11 12:07:38.885: INFO: Waiting for pod pod-08e99714-7480-4fbe-9741-d9b40d1bc866 to disappear
Jun 11 12:07:38.889: INFO: Pod pod-08e99714-7480-4fbe-9741-d9b40d1bc866 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:07:38.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8334" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":240,"skipped":4054,"failed":0}
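A minimal sketch of the emptyDir case verified above: a non-root pod writing to an emptyDir volume on the default medium. Names, image, and UID are illustrative, not the test's exact spec.

```shell
# Non-root container writing to an emptyDir volume (default medium).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo          # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # non-root, as in the (non-root,...) variant
  containers:
  - name: test-container
    image: busybox             # illustrative image
    command: ["sh", "-c", "touch /mnt/test/f && ls -l /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir: {}               # default medium (node-local storage)
EOF
```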
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:07:38.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 11 12:07:39.790: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 11 12:07:41.917: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727474059, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727474059, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727474059, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727474059, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 11 12:07:43.921: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727474059, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727474059, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727474059, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727474059, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 11 12:07:47.034: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:07:47.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4185" for this suite.
STEP: Destroying namespace "webhook-4185-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.401 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":241,"skipped":4061,"failed":0}
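The patch step above amounts to rewriting the operations list in the webhook's first rule; the configuration name below is a placeholder for whatever the test registered.

```shell
# Remove CREATE from the webhook's rule, then add it back with a JSON patch.
# "e2e-test-webhook-config" is a placeholder configuration name.
kubectl patch validatingwebhookconfiguration e2e-test-webhook-config \
  --type='json' \
  -p='[{"op": "replace", "path": "/webhooks/0/rules/0/operations", "value": ["UPDATE"]}]'

kubectl patch validatingwebhookconfiguration e2e-test-webhook-config \
  --type='json' \
  -p='[{"op": "replace", "path": "/webhooks/0/rules/0/operations", "value": ["CREATE", "UPDATE"]}]'
```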
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:07:47.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 11 12:07:48.449: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 11 12:07:50.459: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727474068, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727474068, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727474068, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727474068, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 11 12:07:53.503: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Jun 11 12:07:53.906: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:07:54.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5415" for this suite.
STEP: Destroying namespace "webhook-5415-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.098 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":242,"skipped":4064,"failed":0}
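A webhook rule that intercepts CRD creation, as registered above, can be sketched like this; the configuration name, service reference, path, and caBundle are all placeholders.

```shell
# Sketch of a ValidatingWebhookConfiguration that vetoes CRD creation.
# Service name/namespace, path, and caBundle are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-crd-webhook
webhooks:
- name: deny-crd.example.com
  rules:
  - apiGroups: ["apiextensions.k8s.io"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["customresourcedefinitions"]
  clientConfig:
    service:
      namespace: default
      name: e2e-test-webhook
      path: /crd
    caBundle: "<base64-encoded CA cert>"   # placeholder
  admissionReviewVersions: ["v1"]
  sideEffects: None
EOF
```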
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:07:54.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0611 12:08:35.894115       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 11 12:08:35.894: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:08:35.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9876" for this suite.

• [SLOW TEST:41.479 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":243,"skipped":4104,"failed":0}
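The orphaning behavior checked above corresponds to deleting the replication controller without cascading, so its pods survive; the RC name is a placeholder, and the flag spelling depends on the kubectl version.

```shell
# Delete the RC but leave its pods behind (orphan them).
# "my-rc" is a placeholder; this run used kubectl v1.18, where the
# flag was --cascade=false; newer clients (>= 1.20) use --cascade=orphan.
kubectl delete rc my-rc --cascade=false     # kubectl <= 1.19
kubectl delete rc my-rc --cascade=orphan    # kubectl >= 1.20
```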
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:08:35.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating cluster-info
Jun 11 12:08:35.975: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config cluster-info'
Jun 11 12:08:36.115: INFO: stderr: ""
Jun 11 12:08:36.115: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32772\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32772/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:08:36.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-559" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":275,"completed":244,"skipped":4153,"failed":0}
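The cluster-info validation above, plus the dump variant that the command's own output suggests for deeper debugging; the output directory is illustrative.

```shell
# Show master and KubeDNS endpoints, as validated by the test.
kubectl cluster-info

# Dump full cluster state for debugging; directory is illustrative.
kubectl cluster-info dump --output-directory=/tmp/cluster-dump
```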
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:08:36.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 12:08:36.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-7387
I0611 12:08:36.299183       7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7387, replica count: 1
I0611 12:08:37.349604       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0611 12:08:38.349905       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0611 12:08:39.350226       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0611 12:08:40.350493       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jun 11 12:08:40.534: INFO: Created: latency-svc-v7pbv
Jun 11 12:08:40.542: INFO: Got endpoints: latency-svc-v7pbv [91.191346ms]
Jun 11 12:08:40.568: INFO: Created: latency-svc-926zt
Jun 11 12:08:40.582: INFO: Got endpoints: latency-svc-926zt [40.36026ms]
Jun 11 12:08:40.611: INFO: Created: latency-svc-p28rb
Jun 11 12:08:40.630: INFO: Got endpoints: latency-svc-p28rb [88.694135ms]
Jun 11 12:08:40.714: INFO: Created: latency-svc-2bgb7
Jun 11 12:08:40.720: INFO: Got endpoints: latency-svc-2bgb7 [178.364832ms]
Jun 11 12:08:40.755: INFO: Created: latency-svc-ddpn5
Jun 11 12:08:40.770: INFO: Got endpoints: latency-svc-ddpn5 [228.143402ms]
Jun 11 12:08:40.790: INFO: Created: latency-svc-p6zz5
Jun 11 12:08:40.867: INFO: Got endpoints: latency-svc-p6zz5 [324.917901ms]
Jun 11 12:08:40.890: INFO: Created: latency-svc-pcc9l
Jun 11 12:08:40.907: INFO: Got endpoints: latency-svc-pcc9l [364.725515ms]
Jun 11 12:08:41.056: INFO: Created: latency-svc-mhkpr
Jun 11 12:08:41.125: INFO: Got endpoints: latency-svc-mhkpr [583.093652ms]
Jun 11 12:08:41.199: INFO: Created: latency-svc-qc5rp
Jun 11 12:08:41.229: INFO: Got endpoints: latency-svc-qc5rp [686.762111ms]
Jun 11 12:08:41.257: INFO: Created: latency-svc-t4t95
Jun 11 12:08:41.268: INFO: Got endpoints: latency-svc-t4t95 [726.211389ms]
Jun 11 12:08:41.336: INFO: Created: latency-svc-qvshf
Jun 11 12:08:41.346: INFO: Got endpoints: latency-svc-qvshf [803.675437ms]
Jun 11 12:08:41.367: INFO: Created: latency-svc-gg92r
Jun 11 12:08:41.396: INFO: Got endpoints: latency-svc-gg92r [854.143344ms]
Jun 11 12:08:41.433: INFO: Created: latency-svc-zrb8x
Jun 11 12:08:41.462: INFO: Got endpoints: latency-svc-zrb8x [919.906768ms]
Jun 11 12:08:41.732: INFO: Created: latency-svc-2fzdn
Jun 11 12:08:41.930: INFO: Got endpoints: latency-svc-2fzdn [1.387471976s]
Jun 11 12:08:41.998: INFO: Created: latency-svc-9x72g
Jun 11 12:08:42.201: INFO: Got endpoints: latency-svc-9x72g [1.658649495s]
Jun 11 12:08:42.371: INFO: Created: latency-svc-6zcrk
Jun 11 12:08:42.428: INFO: Got endpoints: latency-svc-6zcrk [1.885618121s]
Jun 11 12:08:42.609: INFO: Created: latency-svc-d8jg6
Jun 11 12:08:42.871: INFO: Got endpoints: latency-svc-d8jg6 [2.288386666s]
Jun 11 12:08:43.056: INFO: Created: latency-svc-q8wb8
Jun 11 12:08:43.100: INFO: Got endpoints: latency-svc-q8wb8 [2.469983718s]
Jun 11 12:08:43.120: INFO: Created: latency-svc-xqqnm
Jun 11 12:08:43.511: INFO: Got endpoints: latency-svc-xqqnm [2.791088226s]
Jun 11 12:08:43.711: INFO: Created: latency-svc-jgdsf
Jun 11 12:08:43.734: INFO: Got endpoints: latency-svc-jgdsf [2.964115171s]
Jun 11 12:08:44.082: INFO: Created: latency-svc-rshgr
Jun 11 12:08:44.279: INFO: Got endpoints: latency-svc-rshgr [3.412344109s]
Jun 11 12:08:44.700: INFO: Created: latency-svc-zp7pq
Jun 11 12:08:44.923: INFO: Got endpoints: latency-svc-zp7pq [4.016301626s]
Jun 11 12:08:45.212: INFO: Created: latency-svc-26wrn
Jun 11 12:08:45.535: INFO: Got endpoints: latency-svc-26wrn [4.409693745s]
Jun 11 12:08:46.165: INFO: Created: latency-svc-cd9gh
Jun 11 12:08:46.432: INFO: Got endpoints: latency-svc-cd9gh [5.203627305s]
Jun 11 12:08:46.442: INFO: Created: latency-svc-rxhrf
Jun 11 12:08:46.837: INFO: Got endpoints: latency-svc-rxhrf [5.569043085s]
Jun 11 12:08:47.217: INFO: Created: latency-svc-txlw9
Jun 11 12:08:47.533: INFO: Got endpoints: latency-svc-txlw9 [6.186788997s]
Jun 11 12:08:47.905: INFO: Created: latency-svc-bhl4g
Jun 11 12:08:47.927: INFO: Got endpoints: latency-svc-bhl4g [6.531168138s]
Jun 11 12:08:48.169: INFO: Created: latency-svc-7nww9
Jun 11 12:08:48.493: INFO: Created: latency-svc-ntkpf
Jun 11 12:08:48.493: INFO: Got endpoints: latency-svc-7nww9 [7.030985271s]
Jun 11 12:08:48.802: INFO: Got endpoints: latency-svc-ntkpf [6.872630234s]
Jun 11 12:08:49.432: INFO: Created: latency-svc-fcwjj
Jun 11 12:08:49.475: INFO: Got endpoints: latency-svc-fcwjj [7.274412575s]
Jun 11 12:08:49.476: INFO: Created: latency-svc-bgkx7
Jun 11 12:08:49.497: INFO: Got endpoints: latency-svc-bgkx7 [7.069181232s]
Jun 11 12:08:49.723: INFO: Created: latency-svc-sl7fz
Jun 11 12:08:49.935: INFO: Created: latency-svc-2gw2j
Jun 11 12:08:49.935: INFO: Got endpoints: latency-svc-sl7fz [7.064546053s]
Jun 11 12:08:49.953: INFO: Got endpoints: latency-svc-2gw2j [6.852773547s]
Jun 11 12:08:50.304: INFO: Created: latency-svc-975mf
Jun 11 12:08:50.308: INFO: Got endpoints: latency-svc-975mf [6.797025641s]
Jun 11 12:08:50.357: INFO: Created: latency-svc-s8z26
Jun 11 12:08:50.374: INFO: Got endpoints: latency-svc-s8z26 [6.640245877s]
Jun 11 12:08:50.457: INFO: Created: latency-svc-zv77t
Jun 11 12:08:50.464: INFO: Got endpoints: latency-svc-zv77t [6.184223585s]
Jun 11 12:08:50.525: INFO: Created: latency-svc-6ztfd
Jun 11 12:08:50.542: INFO: Got endpoints: latency-svc-6ztfd [5.618983551s]
Jun 11 12:08:50.637: INFO: Created: latency-svc-fxrzg
Jun 11 12:08:50.651: INFO: Got endpoints: latency-svc-fxrzg [5.115743698s]
Jun 11 12:08:50.712: INFO: Created: latency-svc-lt7ht
Jun 11 12:08:50.798: INFO: Got endpoints: latency-svc-lt7ht [4.365592761s]
Jun 11 12:08:50.831: INFO: Created: latency-svc-d66h6
Jun 11 12:08:50.854: INFO: Got endpoints: latency-svc-d66h6 [4.016387219s]
Jun 11 12:08:50.886: INFO: Created: latency-svc-lnjkz
Jun 11 12:08:50.962: INFO: Got endpoints: latency-svc-lnjkz [3.429629347s]
Jun 11 12:08:51.010: INFO: Created: latency-svc-9vv6s
Jun 11 12:08:51.023: INFO: Got endpoints: latency-svc-9vv6s [3.095747302s]
Jun 11 12:08:51.045: INFO: Created: latency-svc-tt2kw
Jun 11 12:08:51.091: INFO: Got endpoints: latency-svc-tt2kw [2.597895256s]
Jun 11 12:08:51.138: INFO: Created: latency-svc-2nl75
Jun 11 12:08:51.143: INFO: Got endpoints: latency-svc-2nl75 [2.340596342s]
Jun 11 12:08:51.223: INFO: Created: latency-svc-5zx5m
Jun 11 12:08:51.234: INFO: Got endpoints: latency-svc-5zx5m [1.75892549s]
Jun 11 12:08:51.255: INFO: Created: latency-svc-2d2z2
Jun 11 12:08:51.288: INFO: Got endpoints: latency-svc-2d2z2 [1.790742031s]
Jun 11 12:08:51.384: INFO: Created: latency-svc-85gsh
Jun 11 12:08:51.402: INFO: Got endpoints: latency-svc-85gsh [1.466427462s]
Jun 11 12:08:51.442: INFO: Created: latency-svc-5c6bf
Jun 11 12:08:51.472: INFO: Got endpoints: latency-svc-5c6bf [1.518355005s]
Jun 11 12:08:51.528: INFO: Created: latency-svc-jqwq4
Jun 11 12:08:51.535: INFO: Got endpoints: latency-svc-jqwq4 [1.22668983s]
Jun 11 12:08:51.570: INFO: Created: latency-svc-tl5qx
Jun 11 12:08:51.583: INFO: Got endpoints: latency-svc-tl5qx [1.208633113s]
Jun 11 12:08:51.622: INFO: Created: latency-svc-6j5lz
Jun 11 12:08:51.684: INFO: Got endpoints: latency-svc-6j5lz [1.220025853s]
Jun 11 12:08:51.766: INFO: Created: latency-svc-frk9n
Jun 11 12:08:51.821: INFO: Got endpoints: latency-svc-frk9n [1.279126791s]
Jun 11 12:08:51.858: INFO: Created: latency-svc-8vzkr
Jun 11 12:08:51.873: INFO: Got endpoints: latency-svc-8vzkr [1.22208423s]
Jun 11 12:08:51.900: INFO: Created: latency-svc-pql55
Jun 11 12:08:51.960: INFO: Got endpoints: latency-svc-pql55 [1.162013499s]
Jun 11 12:08:52.000: INFO: Created: latency-svc-b9q2q
Jun 11 12:08:52.010: INFO: Got endpoints: latency-svc-b9q2q [1.156529766s]
Jun 11 12:08:52.050: INFO: Created: latency-svc-nk7pw
Jun 11 12:08:52.114: INFO: Got endpoints: latency-svc-nk7pw [1.151756073s]
Jun 11 12:08:52.138: INFO: Created: latency-svc-b9m68
Jun 11 12:08:52.155: INFO: Got endpoints: latency-svc-b9m68 [1.13218199s]
Jun 11 12:08:52.276: INFO: Created: latency-svc-5tbjw
Jun 11 12:08:52.306: INFO: Got endpoints: latency-svc-5tbjw [1.214676972s]
Jun 11 12:08:52.326: INFO: Created: latency-svc-sdp8q
Jun 11 12:08:52.342: INFO: Got endpoints: latency-svc-sdp8q [1.19881714s]
Jun 11 12:08:52.359: INFO: Created: latency-svc-6wbsk
Jun 11 12:08:52.372: INFO: Got endpoints: latency-svc-6wbsk [1.137716935s]
Jun 11 12:08:52.440: INFO: Created: latency-svc-tbf9h
Jun 11 12:08:52.456: INFO: Got endpoints: latency-svc-tbf9h [1.167856175s]
Jun 11 12:08:52.506: INFO: Created: latency-svc-jjnwx
Jun 11 12:08:52.564: INFO: Got endpoints: latency-svc-jjnwx [1.162665838s]
Jun 11 12:08:52.606: INFO: Created: latency-svc-4q2lv
Jun 11 12:08:52.656: INFO: Got endpoints: latency-svc-4q2lv [1.1839193s]
Jun 11 12:08:52.722: INFO: Created: latency-svc-ld27j
Jun 11 12:08:52.743: INFO: Got endpoints: latency-svc-ld27j [1.208268029s]
Jun 11 12:08:52.774: INFO: Created: latency-svc-8whn9
Jun 11 12:08:52.798: INFO: Got endpoints: latency-svc-8whn9 [1.214452996s]
Jun 11 12:08:52.857: INFO: Created: latency-svc-6ftrc
Jun 11 12:08:52.864: INFO: Got endpoints: latency-svc-6ftrc [1.180185676s]
Jun 11 12:08:52.889: INFO: Created: latency-svc-25gck
Jun 11 12:08:52.906: INFO: Got endpoints: latency-svc-25gck [1.085142016s]
Jun 11 12:08:52.934: INFO: Created: latency-svc-4hkl8
Jun 11 12:08:52.955: INFO: Got endpoints: latency-svc-4hkl8 [1.081932775s]
Jun 11 12:08:53.068: INFO: Created: latency-svc-c7zpw
Jun 11 12:08:53.081: INFO: Got endpoints: latency-svc-c7zpw [1.120668527s]
Jun 11 12:08:53.142: INFO: Created: latency-svc-s48sw
Jun 11 12:08:53.222: INFO: Got endpoints: latency-svc-s48sw [1.211867223s]
Jun 11 12:08:53.250: INFO: Created: latency-svc-mmvz9
Jun 11 12:08:53.261: INFO: Got endpoints: latency-svc-mmvz9 [1.147148958s]
Jun 11 12:08:53.280: INFO: Created: latency-svc-kr5sk
Jun 11 12:08:53.292: INFO: Got endpoints: latency-svc-kr5sk [1.1363357s]
Jun 11 12:08:53.378: INFO: Created: latency-svc-8tq49
Jun 11 12:08:53.392: INFO: Got endpoints: latency-svc-8tq49 [1.086033279s]
Jun 11 12:08:53.424: INFO: Created: latency-svc-hv24n
Jun 11 12:08:53.449: INFO: Got endpoints: latency-svc-hv24n [1.107137061s]
Jun 11 12:08:53.522: INFO: Created: latency-svc-fllh8
Jun 11 12:08:53.542: INFO: Got endpoints: latency-svc-fllh8 [1.169872265s]
Jun 11 12:08:53.596: INFO: Created: latency-svc-btxz8
Jun 11 12:08:53.611: INFO: Got endpoints: latency-svc-btxz8 [1.155019344s]
Jun 11 12:08:53.671: INFO: Created: latency-svc-62hbs
Jun 11 12:08:53.679: INFO: Got endpoints: latency-svc-62hbs [1.114162072s]
Jun 11 12:08:53.730: INFO: Created: latency-svc-mdgfh
Jun 11 12:08:53.743: INFO: Got endpoints: latency-svc-mdgfh [1.087539293s]
Jun 11 12:08:53.845: INFO: Created: latency-svc-nz4cv
Jun 11 12:08:53.851: INFO: Got endpoints: latency-svc-nz4cv [1.107610209s]
Jun 11 12:08:53.869: INFO: Created: latency-svc-qcj96
Jun 11 12:08:53.882: INFO: Got endpoints: latency-svc-qcj96 [1.084458088s]
Jun 11 12:08:53.995: INFO: Created: latency-svc-b2269
Jun 11 12:08:54.002: INFO: Got endpoints: latency-svc-b2269 [1.137992632s]
Jun 11 12:08:54.054: INFO: Created: latency-svc-48rmv
Jun 11 12:08:54.068: INFO: Got endpoints: latency-svc-48rmv [1.161424243s]
Jun 11 12:08:54.151: INFO: Created: latency-svc-f76pb
Jun 11 12:08:54.158: INFO: Got endpoints: latency-svc-f76pb [1.203545614s]
Jun 11 12:08:54.178: INFO: Created: latency-svc-djmvl
Jun 11 12:08:54.195: INFO: Got endpoints: latency-svc-djmvl [1.113772819s]
Jun 11 12:08:54.276: INFO: Created: latency-svc-kbc8m
Jun 11 12:08:54.280: INFO: Got endpoints: latency-svc-kbc8m [1.057116108s]
Jun 11 12:08:54.306: INFO: Created: latency-svc-cfgjh
Jun 11 12:08:54.315: INFO: Got endpoints: latency-svc-cfgjh [1.053542161s]
Jun 11 12:08:54.359: INFO: Created: latency-svc-g8xmn
Jun 11 12:08:54.420: INFO: Got endpoints: latency-svc-g8xmn [1.128135627s]
Jun 11 12:08:54.438: INFO: Created: latency-svc-rgnr5
Jun 11 12:08:54.463: INFO: Got endpoints: latency-svc-rgnr5 [1.07118372s]
Jun 11 12:08:54.492: INFO: Created: latency-svc-t28sb
Jun 11 12:08:54.502: INFO: Got endpoints: latency-svc-t28sb [1.053102426s]
Jun 11 12:08:54.558: INFO: Created: latency-svc-v6fqf
Jun 11 12:08:54.564: INFO: Got endpoints: latency-svc-v6fqf [1.021667063s]
Jun 11 12:08:54.599: INFO: Created: latency-svc-txpf5
Jun 11 12:08:54.611: INFO: Got endpoints: latency-svc-txpf5 [999.904214ms]
Jun 11 12:08:54.629: INFO: Created: latency-svc-qwfh7
Jun 11 12:08:54.641: INFO: Got endpoints: latency-svc-qwfh7 [962.78199ms]
Jun 11 12:08:54.708: INFO: Created: latency-svc-8w2sq
Jun 11 12:08:54.732: INFO: Got endpoints: latency-svc-8w2sq [988.844922ms]
Jun 11 12:08:54.754: INFO: Created: latency-svc-rlkr6
Jun 11 12:08:54.780: INFO: Got endpoints: latency-svc-rlkr6 [928.657866ms]
Jun 11 12:08:54.845: INFO: Created: latency-svc-9zlxh
Jun 11 12:08:54.852: INFO: Got endpoints: latency-svc-9zlxh [969.955795ms]
Jun 11 12:08:54.871: INFO: Created: latency-svc-dmx8c
Jun 11 12:08:54.901: INFO: Got endpoints: latency-svc-dmx8c [899.401758ms]
Jun 11 12:08:54.931: INFO: Created: latency-svc-5lngc
Jun 11 12:08:54.942: INFO: Got endpoints: latency-svc-5lngc [874.167366ms]
Jun 11 12:08:55.024: INFO: Created: latency-svc-bkxdq
Jun 11 12:08:55.038: INFO: Got endpoints: latency-svc-bkxdq [880.007702ms]
Jun 11 12:08:55.056: INFO: Created: latency-svc-827fg
Jun 11 12:08:55.069: INFO: Got endpoints: latency-svc-827fg [874.273018ms]
Jun 11 12:08:55.086: INFO: Created: latency-svc-2qjdw
Jun 11 12:08:55.100: INFO: Got endpoints: latency-svc-2qjdw [820.096595ms]
Jun 11 12:08:55.156: INFO: Created: latency-svc-c828q
Jun 11 12:08:55.160: INFO: Got endpoints: latency-svc-c828q [844.979768ms]
Jun 11 12:08:55.204: INFO: Created: latency-svc-zp5qd
Jun 11 12:08:55.222: INFO: Got endpoints: latency-svc-zp5qd [801.836162ms]
Jun 11 12:08:55.248: INFO: Created: latency-svc-h8bn8
Jun 11 12:08:55.282: INFO: Got endpoints: latency-svc-h8bn8 [818.922299ms]
Jun 11 12:08:55.360: INFO: Created: latency-svc-pr28f
Jun 11 12:08:55.432: INFO: Got endpoints: latency-svc-pr28f [929.756547ms]
Jun 11 12:08:55.436: INFO: Created: latency-svc-645pk
Jun 11 12:08:55.443: INFO: Got endpoints: latency-svc-645pk [879.06981ms]
Jun 11 12:08:55.523: INFO: Created: latency-svc-jghk8
Jun 11 12:08:55.582: INFO: Got endpoints: latency-svc-jghk8 [970.906158ms]
Jun 11 12:08:55.614: INFO: Created: latency-svc-4kh5n
Jun 11 12:08:55.636: INFO: Got endpoints: latency-svc-4kh5n [994.301514ms]
Jun 11 12:08:55.668: INFO: Created: latency-svc-l2mlp
Jun 11 12:08:55.750: INFO: Got endpoints: latency-svc-l2mlp [1.017471353s]
Jun 11 12:08:55.753: INFO: Created: latency-svc-zl9pf
Jun 11 12:08:55.768: INFO: Got endpoints: latency-svc-zl9pf [987.709017ms]
Jun 11 12:08:55.799: INFO: Created: latency-svc-rjxg5
Jun 11 12:08:55.810: INFO: Got endpoints: latency-svc-rjxg5 [957.996226ms]
Jun 11 12:08:55.828: INFO: Created: latency-svc-bzfpt
Jun 11 12:08:55.842: INFO: Got endpoints: latency-svc-bzfpt [940.790665ms]
Jun 11 12:08:55.912: INFO: Created: latency-svc-krzp5
Jun 11 12:08:55.925: INFO: Got endpoints: latency-svc-krzp5 [982.734294ms]
Jun 11 12:08:55.960: INFO: Created: latency-svc-w8wwz
Jun 11 12:08:55.987: INFO: Got endpoints: latency-svc-w8wwz [948.827577ms]
Jun 11 12:08:56.044: INFO: Created: latency-svc-fnt2h
Jun 11 12:08:56.087: INFO: Created: latency-svc-h6c7z
Jun 11 12:08:56.087: INFO: Got endpoints: latency-svc-fnt2h [1.018065298s]
Jun 11 12:08:56.186: INFO: Got endpoints: latency-svc-h6c7z [1.086504002s]
Jun 11 12:08:56.224: INFO: Created: latency-svc-9tc4g
Jun 11 12:08:56.249: INFO: Got endpoints: latency-svc-9tc4g [1.08922729s]
Jun 11 12:08:56.366: INFO: Created: latency-svc-vfwxm
Jun 11 12:08:56.389: INFO: Created: latency-svc-ftjgn
Jun 11 12:08:56.389: INFO: Got endpoints: latency-svc-vfwxm [1.167182008s]
Jun 11 12:08:56.414: INFO: Got endpoints: latency-svc-ftjgn [1.131469507s]
Jun 11 12:08:56.435: INFO: Created: latency-svc-7rxdr
Jun 11 12:08:56.448: INFO: Got endpoints: latency-svc-7rxdr [1.01651494s]
Jun 11 12:08:56.528: INFO: Created: latency-svc-vxnnh
Jun 11 12:08:56.545: INFO: Got endpoints: latency-svc-vxnnh [1.10185468s]
Jun 11 12:08:56.575: INFO: Created: latency-svc-t8jxg
Jun 11 12:08:56.587: INFO: Got endpoints: latency-svc-t8jxg [1.005174639s]
Jun 11 12:08:56.605: INFO: Created: latency-svc-4lpc8
Jun 11 12:08:56.617: INFO: Got endpoints: latency-svc-4lpc8 [981.522826ms]
Jun 11 12:08:56.692: INFO: Created: latency-svc-gpq58
Jun 11 12:08:56.722: INFO: Got endpoints: latency-svc-gpq58 [972.646893ms]
Jun 11 12:08:56.755: INFO: Created: latency-svc-vwt4v
Jun 11 12:08:56.824: INFO: Got endpoints: latency-svc-vwt4v [1.056196457s]
Jun 11 12:08:56.860: INFO: Created: latency-svc-hn5dw
Jun 11 12:08:56.891: INFO: Got endpoints: latency-svc-hn5dw [1.080508009s]
Jun 11 12:08:57.025: INFO: Created: latency-svc-gbqqp
Jun 11 12:08:57.059: INFO: Got endpoints: latency-svc-gbqqp [1.216764155s]
Jun 11 12:08:57.061: INFO: Created: latency-svc-8nfb9
Jun 11 12:08:57.096: INFO: Got endpoints: latency-svc-8nfb9 [1.171344881s]
Jun 11 12:08:57.187: INFO: Created: latency-svc-878wh
Jun 11 12:08:57.233: INFO: Got endpoints: latency-svc-878wh [1.24613271s]
Jun 11 12:08:57.378: INFO: Created: latency-svc-hwjkq
Jun 11 12:08:57.386: INFO: Got endpoints: latency-svc-hwjkq [1.298950558s]
Jun 11 12:08:57.415: INFO: Created: latency-svc-h6kwz
Jun 11 12:08:57.451: INFO: Got endpoints: latency-svc-h6kwz [1.264905613s]
Jun 11 12:08:57.516: INFO: Created: latency-svc-jtth5
Jun 11 12:08:57.522: INFO: Got endpoints: latency-svc-jtth5 [1.272730944s]
Jun 11 12:08:57.560: INFO: Created: latency-svc-94xxr
Jun 11 12:08:57.590: INFO: Got endpoints: latency-svc-94xxr [1.200515105s]
Jun 11 12:08:57.647: INFO: Created: latency-svc-b44rz
Jun 11 12:08:57.678: INFO: Got endpoints: latency-svc-b44rz [1.264382349s]
Jun 11 12:08:57.715: INFO: Created: latency-svc-xkq6k
Jun 11 12:08:57.794: INFO: Got endpoints: latency-svc-xkq6k [1.345221566s]
Jun 11 12:08:57.876: INFO: Created: latency-svc-bjc6v
Jun 11 12:08:57.930: INFO: Got endpoints: latency-svc-bjc6v [1.384944995s]
Jun 11 12:08:57.962: INFO: Created: latency-svc-97gkr
Jun 11 12:08:57.990: INFO: Got endpoints: latency-svc-97gkr [1.403235266s]
Jun 11 12:08:58.020: INFO: Created: latency-svc-wjzj2
Jun 11 12:08:58.072: INFO: Got endpoints: latency-svc-wjzj2 [1.454821142s]
Jun 11 12:08:58.094: INFO: Created: latency-svc-56sjw
Jun 11 12:08:58.122: INFO: Got endpoints: latency-svc-56sjw [1.399633669s]
Jun 11 12:08:58.166: INFO: Created: latency-svc-cqskz
Jun 11 12:08:58.294: INFO: Got endpoints: latency-svc-cqskz [1.470451961s]
Jun 11 12:08:58.344: INFO: Created: latency-svc-qqrln
Jun 11 12:08:58.375: INFO: Got endpoints: latency-svc-qqrln [1.48386155s]
Jun 11 12:08:58.439: INFO: Created: latency-svc-87vcm
Jun 11 12:08:58.452: INFO: Got endpoints: latency-svc-87vcm [1.393115238s]
Jun 11 12:08:58.477: INFO: Created: latency-svc-9ss8t
Jun 11 12:08:58.501: INFO: Got endpoints: latency-svc-9ss8t [1.40475693s]
Jun 11 12:08:58.526: INFO: Created: latency-svc-qvvhn
Jun 11 12:08:58.599: INFO: Got endpoints: latency-svc-qvvhn [1.365849136s]
Jun 11 12:08:58.607: INFO: Created: latency-svc-75h6l
Jun 11 12:08:58.668: INFO: Got endpoints: latency-svc-75h6l [1.28226631s]
Jun 11 12:08:58.692: INFO: Created: latency-svc-mcrl9
Jun 11 12:08:59.193: INFO: Got endpoints: latency-svc-mcrl9 [1.741882402s]
Jun 11 12:08:59.216: INFO: Created: latency-svc-drswt
Jun 11 12:08:59.290: INFO: Got endpoints: latency-svc-drswt [1.76788244s]
Jun 11 12:08:59.451: INFO: Created: latency-svc-4f4xj
Jun 11 12:08:59.630: INFO: Got endpoints: latency-svc-4f4xj [2.040146281s]
Jun 11 12:08:59.633: INFO: Created: latency-svc-6zr6m
Jun 11 12:08:59.676: INFO: Got endpoints: latency-svc-6zr6m [1.997931818s]
Jun 11 12:08:59.791: INFO: Created: latency-svc-jnnmw
Jun 11 12:09:00.699: INFO: Got endpoints: latency-svc-jnnmw [2.90504707s]
Jun 11 12:09:00.702: INFO: Created: latency-svc-f5q6p
Jun 11 12:09:00.723: INFO: Got endpoints: latency-svc-f5q6p [2.793157711s]
Jun 11 12:09:00.750: INFO: Created: latency-svc-hfl9p
Jun 11 12:09:00.755: INFO: Got endpoints: latency-svc-hfl9p [2.765062557s]
Jun 11 12:09:01.170: INFO: Created: latency-svc-lcvfl
Jun 11 12:09:01.187: INFO: Got endpoints: latency-svc-lcvfl [3.114609675s]
Jun 11 12:09:01.220: INFO: Created: latency-svc-dskj2
Jun 11 12:09:01.236: INFO: Got endpoints: latency-svc-dskj2 [3.113560503s]
Jun 11 12:09:01.261: INFO: Created: latency-svc-frxjs
Jun 11 12:09:01.301: INFO: Got endpoints: latency-svc-frxjs [3.006075837s]
Jun 11 12:09:01.334: INFO: Created: latency-svc-mbtms
Jun 11 12:09:01.376: INFO: Got endpoints: latency-svc-mbtms [3.001297362s]
Jun 11 12:09:01.435: INFO: Created: latency-svc-ztsms
Jun 11 12:09:01.470: INFO: Got endpoints: latency-svc-ztsms [3.017894701s]
Jun 11 12:09:01.508: INFO: Created: latency-svc-74mxs
Jun 11 12:09:01.595: INFO: Got endpoints: latency-svc-74mxs [3.093625694s]
Jun 11 12:09:01.597: INFO: Created: latency-svc-2ggqq
Jun 11 12:09:01.614: INFO: Got endpoints: latency-svc-2ggqq [3.014870999s]
Jun 11 12:09:01.639: INFO: Created: latency-svc-rp2x9
Jun 11 12:09:01.664: INFO: Got endpoints: latency-svc-rp2x9 [2.995468894s]
Jun 11 12:09:01.749: INFO: Created: latency-svc-8r5tq
Jun 11 12:09:01.755: INFO: Got endpoints: latency-svc-8r5tq [2.561853416s]
Jun 11 12:09:01.801: INFO: Created: latency-svc-bc6sh
Jun 11 12:09:01.825: INFO: Got endpoints: latency-svc-bc6sh [2.534657586s]
Jun 11 12:09:02.062: INFO: Created: latency-svc-ppzg6
Jun 11 12:09:02.085: INFO: Got endpoints: latency-svc-ppzg6 [2.454717763s]
Jun 11 12:09:02.192: INFO: Created: latency-svc-5w88g
Jun 11 12:09:02.235: INFO: Got endpoints: latency-svc-5w88g [2.559309088s]
Jun 11 12:09:02.236: INFO: Created: latency-svc-68v99
Jun 11 12:09:02.250: INFO: Got endpoints: latency-svc-68v99 [1.551094163s]
Jun 11 12:09:02.275: INFO: Created: latency-svc-pmgr7
Jun 11 12:09:02.325: INFO: Got endpoints: latency-svc-pmgr7 [1.601671561s]
Jun 11 12:09:02.348: INFO: Created: latency-svc-9fqh6
Jun 11 12:09:02.360: INFO: Got endpoints: latency-svc-9fqh6 [1.604547398s]
Jun 11 12:09:02.379: INFO: Created: latency-svc-d2gc2
Jun 11 12:09:02.390: INFO: Got endpoints: latency-svc-d2gc2 [1.203491629s]
Jun 11 12:09:02.407: INFO: Created: latency-svc-g4hdn
Jun 11 12:09:02.456: INFO: Got endpoints: latency-svc-g4hdn [1.219996856s]
Jun 11 12:09:02.458: INFO: Created: latency-svc-2lm7g
Jun 11 12:09:02.468: INFO: Got endpoints: latency-svc-2lm7g [1.167819273s]
Jun 11 12:09:02.491: INFO: Created: latency-svc-lvd9n
Jun 11 12:09:02.505: INFO: Got endpoints: latency-svc-lvd9n [1.129406505s]
Jun 11 12:09:02.523: INFO: Created: latency-svc-pxs6j
Jun 11 12:09:02.536: INFO: Got endpoints: latency-svc-pxs6j [1.06537487s]
Jun 11 12:09:02.553: INFO: Created: latency-svc-hqhww
Jun 11 12:09:02.587: INFO: Got endpoints: latency-svc-hqhww [992.209113ms]
Jun 11 12:09:02.598: INFO: Created: latency-svc-q9kdv
Jun 11 12:09:02.614: INFO: Got endpoints: latency-svc-q9kdv [999.407464ms]
Jun 11 12:09:02.635: INFO: Created: latency-svc-zmtqt
Jun 11 12:09:02.650: INFO: Got endpoints: latency-svc-zmtqt [986.356131ms]
Jun 11 12:09:02.671: INFO: Created: latency-svc-lp4zp
Jun 11 12:09:02.687: INFO: Got endpoints: latency-svc-lp4zp [931.956867ms]
Jun 11 12:09:02.731: INFO: Created: latency-svc-ns6n5
Jun 11 12:09:02.741: INFO: Got endpoints: latency-svc-ns6n5 [915.951975ms]
Jun 11 12:09:02.775: INFO: Created: latency-svc-k4t6w
Jun 11 12:09:02.791: INFO: Got endpoints: latency-svc-k4t6w [705.923867ms]
Jun 11 12:09:02.808: INFO: Created: latency-svc-8lhs2
Jun 11 12:09:02.819: INFO: Got endpoints: latency-svc-8lhs2 [583.985258ms]
Jun 11 12:09:02.869: INFO: Created: latency-svc-k7b8g
Jun 11 12:09:02.872: INFO: Got endpoints: latency-svc-k7b8g [622.258057ms]
Jun 11 12:09:02.912: INFO: Created: latency-svc-b9bm4
Jun 11 12:09:02.928: INFO: Got endpoints: latency-svc-b9bm4 [603.663285ms]
Jun 11 12:09:02.961: INFO: Created: latency-svc-pg5rw
Jun 11 12:09:03.025: INFO: Got endpoints: latency-svc-pg5rw [665.2566ms]
Jun 11 12:09:03.037: INFO: Created: latency-svc-glh6t
Jun 11 12:09:03.068: INFO: Got endpoints: latency-svc-glh6t [677.313879ms]
Jun 11 12:09:03.092: INFO: Created: latency-svc-626z8
Jun 11 12:09:03.103: INFO: Got endpoints: latency-svc-626z8 [647.363761ms]
Jun 11 12:09:03.123: INFO: Created: latency-svc-g84l5
Jun 11 12:09:03.163: INFO: Got endpoints: latency-svc-g84l5 [694.096326ms]
Jun 11 12:09:03.183: INFO: Created: latency-svc-kfw5q
Jun 11 12:09:03.200: INFO: Got endpoints: latency-svc-kfw5q [694.874635ms]
Jun 11 12:09:03.216: INFO: Created: latency-svc-5rj5s
Jun 11 12:09:03.230: INFO: Got endpoints: latency-svc-5rj5s [694.668265ms]
Jun 11 12:09:03.253: INFO: Created: latency-svc-g45fm
Jun 11 12:09:03.300: INFO: Got endpoints: latency-svc-g45fm [713.206284ms]
Jun 11 12:09:03.334: INFO: Created: latency-svc-llxrz
Jun 11 12:09:03.369: INFO: Got endpoints: latency-svc-llxrz [754.779053ms]
Jun 11 12:09:03.458: INFO: Created: latency-svc-sgt2v
Jun 11 12:09:03.470: INFO: Got endpoints: latency-svc-sgt2v [819.782784ms]
Jun 11 12:09:03.519: INFO: Created: latency-svc-pqr7f
Jun 11 12:09:03.537: INFO: Got endpoints: latency-svc-pqr7f [850.288329ms]
Jun 11 12:09:03.594: INFO: Created: latency-svc-vwxsm
Jun 11 12:09:03.621: INFO: Got endpoints: latency-svc-vwxsm [880.477822ms]
Jun 11 12:09:03.655: INFO: Created: latency-svc-sk29f
Jun 11 12:09:03.669: INFO: Got endpoints: latency-svc-sk29f [878.753416ms]
Jun 11 12:09:03.690: INFO: Created: latency-svc-dq2tg
Jun 11 12:09:03.725: INFO: Got endpoints: latency-svc-dq2tg [905.958275ms]
Jun 11 12:09:03.738: INFO: Created: latency-svc-q9jwb
Jun 11 12:09:03.754: INFO: Got endpoints: latency-svc-q9jwb [882.08246ms]
Jun 11 12:09:03.775: INFO: Created: latency-svc-2gks7
Jun 11 12:09:03.791: INFO: Got endpoints: latency-svc-2gks7 [862.648583ms]
Jun 11 12:09:03.825: INFO: Created: latency-svc-mntgh
Jun 11 12:09:03.923: INFO: Got endpoints: latency-svc-mntgh [897.742361ms]
Jun 11 12:09:03.926: INFO: Created: latency-svc-zphs9
Jun 11 12:09:03.947: INFO: Got endpoints: latency-svc-zphs9 [879.359461ms]
Jun 11 12:09:03.973: INFO: Created: latency-svc-5mz7r
Jun 11 12:09:03.989: INFO: Got endpoints: latency-svc-5mz7r [885.945343ms]
Jun 11 12:09:04.011: INFO: Created: latency-svc-tn2ll
Jun 11 12:09:04.061: INFO: Got endpoints: latency-svc-tn2ll [898.586416ms]
Jun 11 12:09:04.077: INFO: Created: latency-svc-vs9cv
Jun 11 12:09:04.087: INFO: Got endpoints: latency-svc-vs9cv [886.286706ms]
Jun 11 12:09:04.105: INFO: Created: latency-svc-w6qnw
Jun 11 12:09:04.116: INFO: Got endpoints: latency-svc-w6qnw [885.809353ms]
Jun 11 12:09:04.116: INFO: Latencies: [40.36026ms 88.694135ms 178.364832ms 228.143402ms 324.917901ms 364.725515ms 583.093652ms 583.985258ms 603.663285ms 622.258057ms 647.363761ms 665.2566ms 677.313879ms 686.762111ms 694.096326ms 694.668265ms 694.874635ms 705.923867ms 713.206284ms 726.211389ms 754.779053ms 801.836162ms 803.675437ms 818.922299ms 819.782784ms 820.096595ms 844.979768ms 850.288329ms 854.143344ms 862.648583ms 874.167366ms 874.273018ms 878.753416ms 879.06981ms 879.359461ms 880.007702ms 880.477822ms 882.08246ms 885.809353ms 885.945343ms 886.286706ms 897.742361ms 898.586416ms 899.401758ms 905.958275ms 915.951975ms 919.906768ms 928.657866ms 929.756547ms 931.956867ms 940.790665ms 948.827577ms 957.996226ms 962.78199ms 969.955795ms 970.906158ms 972.646893ms 981.522826ms 982.734294ms 986.356131ms 987.709017ms 988.844922ms 992.209113ms 994.301514ms 999.407464ms 999.904214ms 1.005174639s 1.01651494s 1.017471353s 1.018065298s 1.021667063s 1.053102426s 1.053542161s 1.056196457s 1.057116108s 1.06537487s 1.07118372s 1.080508009s 1.081932775s 1.084458088s 1.085142016s 1.086033279s 1.086504002s 1.087539293s 1.08922729s 1.10185468s 1.107137061s 1.107610209s 1.113772819s 1.114162072s 1.120668527s 1.128135627s 1.129406505s 1.131469507s 1.13218199s 1.1363357s 1.137716935s 1.137992632s 1.147148958s 1.151756073s 1.155019344s 1.156529766s 1.161424243s 1.162013499s 1.162665838s 1.167182008s 1.167819273s 1.167856175s 1.169872265s 1.171344881s 1.180185676s 1.1839193s 1.19881714s 1.200515105s 1.203491629s 1.203545614s 1.208268029s 1.208633113s 1.211867223s 1.214452996s 1.214676972s 1.216764155s 1.219996856s 1.220025853s 1.22208423s 1.22668983s 1.24613271s 1.264382349s 1.264905613s 1.272730944s 1.279126791s 1.28226631s 1.298950558s 1.345221566s 1.365849136s 1.384944995s 1.387471976s 1.393115238s 1.399633669s 1.403235266s 1.40475693s 1.454821142s 1.466427462s 1.470451961s 1.48386155s 1.518355005s 1.551094163s 1.601671561s 1.604547398s 1.658649495s 1.741882402s 1.75892549s 1.76788244s 1.790742031s 1.885618121s 1.997931818s 2.040146281s 2.288386666s 2.340596342s 2.454717763s 2.469983718s 2.534657586s 2.559309088s 2.561853416s 2.597895256s 2.765062557s 2.791088226s 2.793157711s 2.90504707s 2.964115171s 2.995468894s 3.001297362s 3.006075837s 3.014870999s 3.017894701s 3.093625694s 3.095747302s 3.113560503s 3.114609675s 3.412344109s 3.429629347s 4.016301626s 4.016387219s 4.365592761s 4.409693745s 5.115743698s 5.203627305s 5.569043085s 5.618983551s 6.184223585s 6.186788997s 6.531168138s 6.640245877s 6.797025641s 6.852773547s 6.872630234s 7.030985271s 7.064546053s 7.069181232s 7.274412575s]
Jun 11 12:09:04.116: INFO: 50 %ile: 1.155019344s
Jun 11 12:09:04.117: INFO: 90 %ile: 3.429629347s
Jun 11 12:09:04.117: INFO: 99 %ile: 7.069181232s
Jun 11 12:09:04.117: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:09:04.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-7387" for this suite.

• [SLOW TEST:28.017 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":275,"completed":245,"skipped":4179,"failed":0}
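The test above sorts its 200 latency samples and reports 50/90/99 percentiles. The reported values (1.155019344s, 3.429629347s, 7.069181232s) correspond to picking `sorted[len*p//100]` (0-based) from the sample list; a sketch of that index rule, hedged as an inference from the logged numbers rather than the framework's exact code:

```python
def percentile(samples, p):
    """Pick the p-th percentile as sorted[len*p//100], clamped to the last
    element. This index reproduces the 50/90/99 %ile values in the log
    for its 200 samples; the real e2e helper may differ in edge cases."""
    s = sorted(samples)
    idx = min(len(s) * p // 100, len(s) - 1)
    return s[idx]

# With 200 distinct samples 1..200, the 50 %ile lands on the 101st value.
samples = list(range(1, 201))
assert percentile(samples, 50) == 101
```
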
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:09:04.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jun 11 12:09:04.244: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3930dff2-cad1-4bf2-91dd-4975211894a6" in namespace "downward-api-6348" to be "Succeeded or Failed"
Jun 11 12:09:04.254: INFO: Pod "downwardapi-volume-3930dff2-cad1-4bf2-91dd-4975211894a6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.733545ms
Jun 11 12:09:06.258: INFO: Pod "downwardapi-volume-3930dff2-cad1-4bf2-91dd-4975211894a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013977506s
Jun 11 12:09:08.261: INFO: Pod "downwardapi-volume-3930dff2-cad1-4bf2-91dd-4975211894a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017188431s
STEP: Saw pod success
Jun 11 12:09:08.261: INFO: Pod "downwardapi-volume-3930dff2-cad1-4bf2-91dd-4975211894a6" satisfied condition "Succeeded or Failed"
Jun 11 12:09:08.264: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-3930dff2-cad1-4bf2-91dd-4975211894a6 container client-container: 
STEP: delete the pod
Jun 11 12:09:08.350: INFO: Waiting for pod downwardapi-volume-3930dff2-cad1-4bf2-91dd-4975211894a6 to disappear
Jun 11 12:09:08.367: INFO: Pod downwardapi-volume-3930dff2-cad1-4bf2-91dd-4975211894a6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:09:08.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6348" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":246,"skipped":4184,"failed":0}
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:09:08.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-19bf7f0f-240b-41ea-98c3-01f2e3fc809d
STEP: Creating a pod to test consume configMaps
Jun 11 12:09:08.435: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-32fca13c-c42b-4338-a461-d48d7dd2ecbc" in namespace "projected-1035" to be "Succeeded or Failed"
Jun 11 12:09:08.516: INFO: Pod "pod-projected-configmaps-32fca13c-c42b-4338-a461-d48d7dd2ecbc": Phase="Pending", Reason="", readiness=false. Elapsed: 81.263305ms
Jun 11 12:09:10.576: INFO: Pod "pod-projected-configmaps-32fca13c-c42b-4338-a461-d48d7dd2ecbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140773175s
Jun 11 12:09:12.618: INFO: Pod "pod-projected-configmaps-32fca13c-c42b-4338-a461-d48d7dd2ecbc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.183028516s
Jun 11 12:09:14.654: INFO: Pod "pod-projected-configmaps-32fca13c-c42b-4338-a461-d48d7dd2ecbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.21860461s
STEP: Saw pod success
Jun 11 12:09:14.654: INFO: Pod "pod-projected-configmaps-32fca13c-c42b-4338-a461-d48d7dd2ecbc" satisfied condition "Succeeded or Failed"
Jun 11 12:09:14.656: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-32fca13c-c42b-4338-a461-d48d7dd2ecbc container projected-configmap-volume-test: 
STEP: delete the pod
Jun 11 12:09:14.839: INFO: Waiting for pod pod-projected-configmaps-32fca13c-c42b-4338-a461-d48d7dd2ecbc to disappear
Jun 11 12:09:14.842: INFO: Pod pod-projected-configmaps-32fca13c-c42b-4338-a461-d48d7dd2ecbc no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:09:14.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1035" for this suite.

• [SLOW TEST:6.480 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":247,"skipped":4187,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:09:14.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jun 11 12:09:14.980: INFO: Waiting up to 5m0s for pod "downwardapi-volume-90c6a6b1-862c-4cfc-8471-a8214ae7e362" in namespace "projected-8133" to be "Succeeded or Failed"
Jun 11 12:09:14.998: INFO: Pod "downwardapi-volume-90c6a6b1-862c-4cfc-8471-a8214ae7e362": Phase="Pending", Reason="", readiness=false. Elapsed: 18.08704ms
Jun 11 12:09:17.882: INFO: Pod "downwardapi-volume-90c6a6b1-862c-4cfc-8471-a8214ae7e362": Phase="Pending", Reason="", readiness=false. Elapsed: 2.902400522s
Jun 11 12:09:19.971: INFO: Pod "downwardapi-volume-90c6a6b1-862c-4cfc-8471-a8214ae7e362": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.991711278s
STEP: Saw pod success
Jun 11 12:09:19.971: INFO: Pod "downwardapi-volume-90c6a6b1-862c-4cfc-8471-a8214ae7e362" satisfied condition "Succeeded or Failed"
Jun 11 12:09:20.226: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-90c6a6b1-862c-4cfc-8471-a8214ae7e362 container client-container: 
STEP: delete the pod
Jun 11 12:09:20.334: INFO: Waiting for pod downwardapi-volume-90c6a6b1-862c-4cfc-8471-a8214ae7e362 to disappear
Jun 11 12:09:20.367: INFO: Pod downwardapi-volume-90c6a6b1-862c-4cfc-8471-a8214ae7e362 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:09:20.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8133" for this suite.

• [SLOW TEST:5.888 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":248,"skipped":4213,"failed":0}
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:09:20.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jun 11 12:09:23.129: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jun 11 12:09:26.029: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727474163, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727474163, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727474163, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727474162, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 11 12:09:29.096: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 12:09:29.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:09:30.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-6726" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:10.155 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":249,"skipped":4213,"failed":0}
SSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:09:30.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-af1b2112-8ef1-4eea-8411-9a9c0034ba71 in namespace container-probe-7613
Jun 11 12:09:35.067: INFO: Started pod liveness-af1b2112-8ef1-4eea-8411-9a9c0034ba71 in namespace container-probe-7613
STEP: checking the pod's current state and verifying that restartCount is present
Jun 11 12:09:35.109: INFO: Initial restart count of pod liveness-af1b2112-8ef1-4eea-8411-9a9c0034ba71 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:13:35.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7613" for this suite.

• [SLOW TEST:244.712 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":250,"skipped":4218,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:13:35.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 12:13:36.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jun 11 12:13:39.050: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6598 create -f -'
Jun 11 12:13:39.684: INFO: stderr: ""
Jun 11 12:13:39.684: INFO: stdout: "e2e-test-crd-publish-openapi-1733-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jun 11 12:13:39.684: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6598 delete e2e-test-crd-publish-openapi-1733-crds test-cr'
Jun 11 12:13:39.821: INFO: stderr: ""
Jun 11 12:13:39.821: INFO: stdout: "e2e-test-crd-publish-openapi-1733-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Jun 11 12:13:39.821: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6598 apply -f -'
Jun 11 12:13:40.107: INFO: stderr: ""
Jun 11 12:13:40.107: INFO: stdout: "e2e-test-crd-publish-openapi-1733-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jun 11 12:13:40.107: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6598 delete e2e-test-crd-publish-openapi-1733-crds test-cr'
Jun 11 12:13:40.435: INFO: stderr: ""
Jun 11 12:13:40.435: INFO: stdout: "e2e-test-crd-publish-openapi-1733-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jun 11 12:13:40.435: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1733-crds'
Jun 11 12:13:40.912: INFO: stderr: ""
Jun 11 12:13:40.912: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1733-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:13:44.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6598" for this suite.

• [SLOW TEST:8.462 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":251,"skipped":4223,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:13:44.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 12:13:44.185: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-ccdef232-94f8-4cdd-9f58-f098b5e6ba3a" in namespace "security-context-test-8247" to be "Succeeded or Failed"
Jun 11 12:13:44.189: INFO: Pod "busybox-readonly-false-ccdef232-94f8-4cdd-9f58-f098b5e6ba3a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.794703ms
Jun 11 12:13:46.193: INFO: Pod "busybox-readonly-false-ccdef232-94f8-4cdd-9f58-f098b5e6ba3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008163328s
Jun 11 12:13:48.198: INFO: Pod "busybox-readonly-false-ccdef232-94f8-4cdd-9f58-f098b5e6ba3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013111024s
Jun 11 12:13:48.198: INFO: Pod "busybox-readonly-false-ccdef232-94f8-4cdd-9f58-f098b5e6ba3a" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:13:48.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8247" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":252,"skipped":4277,"failed":0}
SSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:13:48.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-c1c515cc-1d27-413a-bed1-ab428c5a6300 in namespace container-probe-5689
Jun 11 12:13:52.313: INFO: Started pod liveness-c1c515cc-1d27-413a-bed1-ab428c5a6300 in namespace container-probe-5689
STEP: checking the pod's current state and verifying that restartCount is present
Jun 11 12:13:52.316: INFO: Initial restart count of pod liveness-c1c515cc-1d27-413a-bed1-ab428c5a6300 is 0
Jun 11 12:14:14.365: INFO: Restart count of pod container-probe-5689/liveness-c1c515cc-1d27-413a-bed1-ab428c5a6300 is now 1 (22.049215661s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:14:14.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5689" for this suite.

• [SLOW TEST:26.211 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":253,"skipped":4281,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:14:14.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jun 11 12:14:14.856: INFO: Waiting up to 5m0s for pod "downwardapi-volume-004b1757-268f-41b2-b0f5-32652760adbd" in namespace "downward-api-5913" to be "Succeeded or Failed"
Jun 11 12:14:14.921: INFO: Pod "downwardapi-volume-004b1757-268f-41b2-b0f5-32652760adbd": Phase="Pending", Reason="", readiness=false. Elapsed: 64.983839ms
Jun 11 12:14:16.926: INFO: Pod "downwardapi-volume-004b1757-268f-41b2-b0f5-32652760adbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069370913s
Jun 11 12:14:18.930: INFO: Pod "downwardapi-volume-004b1757-268f-41b2-b0f5-32652760adbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073509764s
STEP: Saw pod success
Jun 11 12:14:18.930: INFO: Pod "downwardapi-volume-004b1757-268f-41b2-b0f5-32652760adbd" satisfied condition "Succeeded or Failed"
Jun 11 12:14:18.932: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-004b1757-268f-41b2-b0f5-32652760adbd container client-container: 
STEP: delete the pod
Jun 11 12:14:18.978: INFO: Waiting for pod downwardapi-volume-004b1757-268f-41b2-b0f5-32652760adbd to disappear
Jun 11 12:14:18.993: INFO: Pod downwardapi-volume-004b1757-268f-41b2-b0f5-32652760adbd no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:14:18.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5913" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":254,"skipped":4288,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:14:19.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Jun 11 12:14:19.095: INFO: Waiting up to 5m0s for pod "downward-api-e276696f-19bb-40bf-a4a6-a81433f9426f" in namespace "downward-api-334" to be "Succeeded or Failed"
Jun 11 12:14:19.101: INFO: Pod "downward-api-e276696f-19bb-40bf-a4a6-a81433f9426f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.677779ms
Jun 11 12:14:21.104: INFO: Pod "downward-api-e276696f-19bb-40bf-a4a6-a81433f9426f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009428458s
Jun 11 12:14:23.114: INFO: Pod "downward-api-e276696f-19bb-40bf-a4a6-a81433f9426f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018666706s
STEP: Saw pod success
Jun 11 12:14:23.114: INFO: Pod "downward-api-e276696f-19bb-40bf-a4a6-a81433f9426f" satisfied condition "Succeeded or Failed"
Jun 11 12:14:23.116: INFO: Trying to get logs from node kali-worker pod downward-api-e276696f-19bb-40bf-a4a6-a81433f9426f container dapi-container: 
STEP: delete the pod
Jun 11 12:14:23.177: INFO: Waiting for pod downward-api-e276696f-19bb-40bf-a4a6-a81433f9426f to disappear
Jun 11 12:14:23.185: INFO: Pod downward-api-e276696f-19bb-40bf-a4a6-a81433f9426f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:14:23.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-334" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":255,"skipped":4307,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:14:23.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-lbrp
STEP: Creating a pod to test atomic-volume-subpath
Jun 11 12:14:23.319: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lbrp" in namespace "subpath-9159" to be "Succeeded or Failed"
Jun 11 12:14:23.338: INFO: Pod "pod-subpath-test-configmap-lbrp": Phase="Pending", Reason="", readiness=false. Elapsed: 18.704092ms
Jun 11 12:14:25.342: INFO: Pod "pod-subpath-test-configmap-lbrp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022999174s
Jun 11 12:14:27.346: INFO: Pod "pod-subpath-test-configmap-lbrp": Phase="Running", Reason="", readiness=true. Elapsed: 4.027292858s
Jun 11 12:14:29.632: INFO: Pod "pod-subpath-test-configmap-lbrp": Phase="Running", Reason="", readiness=true. Elapsed: 6.312998789s
Jun 11 12:14:31.636: INFO: Pod "pod-subpath-test-configmap-lbrp": Phase="Running", Reason="", readiness=true. Elapsed: 8.316617602s
Jun 11 12:14:33.640: INFO: Pod "pod-subpath-test-configmap-lbrp": Phase="Running", Reason="", readiness=true. Elapsed: 10.321409554s
Jun 11 12:14:35.645: INFO: Pod "pod-subpath-test-configmap-lbrp": Phase="Running", Reason="", readiness=true. Elapsed: 12.325994023s
Jun 11 12:14:37.648: INFO: Pod "pod-subpath-test-configmap-lbrp": Phase="Running", Reason="", readiness=true. Elapsed: 14.329513727s
Jun 11 12:14:39.652: INFO: Pod "pod-subpath-test-configmap-lbrp": Phase="Running", Reason="", readiness=true. Elapsed: 16.332912113s
Jun 11 12:14:41.656: INFO: Pod "pod-subpath-test-configmap-lbrp": Phase="Running", Reason="", readiness=true. Elapsed: 18.337252125s
Jun 11 12:14:43.661: INFO: Pod "pod-subpath-test-configmap-lbrp": Phase="Running", Reason="", readiness=true. Elapsed: 20.341844173s
Jun 11 12:14:45.665: INFO: Pod "pod-subpath-test-configmap-lbrp": Phase="Running", Reason="", readiness=true. Elapsed: 22.346177673s
Jun 11 12:14:47.670: INFO: Pod "pod-subpath-test-configmap-lbrp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.351039732s
STEP: Saw pod success
Jun 11 12:14:47.670: INFO: Pod "pod-subpath-test-configmap-lbrp" satisfied condition "Succeeded or Failed"
Jun 11 12:14:47.674: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-configmap-lbrp container test-container-subpath-configmap-lbrp: 
STEP: delete the pod
Jun 11 12:14:47.729: INFO: Waiting for pod pod-subpath-test-configmap-lbrp to disappear
Jun 11 12:14:47.737: INFO: Pod pod-subpath-test-configmap-lbrp no longer exists
STEP: Deleting pod pod-subpath-test-configmap-lbrp
Jun 11 12:14:47.737: INFO: Deleting pod "pod-subpath-test-configmap-lbrp" in namespace "subpath-9159"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:14:47.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9159" for this suite.

• [SLOW TEST:24.558 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":256,"skipped":4325,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:14:47.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 11 12:14:48.454: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 11 12:14:50.464: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727474488, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727474488, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727474488, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727474488, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 11 12:14:53.557: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Jun 11 12:14:57.603: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config attach --namespace=webhook-8036 to-be-attached-pod -i -c=container1'
Jun 11 12:14:57.712: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:14:57.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8036" for this suite.
STEP: Destroying namespace "webhook-8036-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.146 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":257,"skipped":4331,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:14:57.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-7253, will wait for the garbage collector to delete the pods
Jun 11 12:15:04.108: INFO: Deleting Job.batch foo took: 66.482363ms
Jun 11 12:15:04.208: INFO: Terminating Job.batch foo pods took: 100.230298ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:15:43.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7253" for this suite.

• [SLOW TEST:46.022 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":258,"skipped":4339,"failed":0}
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:15:43.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-6579
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Jun 11 12:15:44.172: INFO: Found 0 stateful pods, waiting for 3
Jun 11 12:15:54.200: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 11 12:15:54.200: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 11 12:15:54.200: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jun 11 12:16:04.185: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 11 12:16:04.185: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 11 12:16:04.185: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jun 11 12:16:04.196: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6579 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jun 11 12:16:04.497: INFO: stderr: "I0611 12:16:04.328462    3575 log.go:172] (0xc0000e1130) (0xc0007f21e0) Create stream\nI0611 12:16:04.328531    3575 log.go:172] (0xc0000e1130) (0xc0007f21e0) Stream added, broadcasting: 1\nI0611 12:16:04.343264    3575 log.go:172] (0xc0000e1130) Reply frame received for 1\nI0611 12:16:04.343392    3575 log.go:172] (0xc0000e1130) (0xc0007f2280) Create stream\nI0611 12:16:04.343439    3575 log.go:172] (0xc0000e1130) (0xc0007f2280) Stream added, broadcasting: 3\nI0611 12:16:04.344677    3575 log.go:172] (0xc0000e1130) Reply frame received for 3\nI0611 12:16:04.344736    3575 log.go:172] (0xc0000e1130) (0xc000661220) Create stream\nI0611 12:16:04.344760    3575 log.go:172] (0xc0000e1130) (0xc000661220) Stream added, broadcasting: 5\nI0611 12:16:04.346108    3575 log.go:172] (0xc0000e1130) Reply frame received for 5\nI0611 12:16:04.441876    3575 log.go:172] (0xc0000e1130) Data frame received for 5\nI0611 12:16:04.441905    3575 log.go:172] (0xc000661220) (5) Data frame handling\nI0611 12:16:04.441926    3575 log.go:172] (0xc000661220) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0611 12:16:04.488085    3575 log.go:172] (0xc0000e1130) Data frame received for 3\nI0611 12:16:04.488147    3575 log.go:172] (0xc0007f2280) (3) Data frame handling\nI0611 12:16:04.488183    3575 log.go:172] (0xc0000e1130) Data frame received for 5\nI0611 12:16:04.488213    3575 log.go:172] (0xc000661220) (5) Data frame handling\nI0611 12:16:04.488233    3575 log.go:172] (0xc0007f2280) (3) Data frame sent\nI0611 12:16:04.488251    3575 log.go:172] (0xc0000e1130) Data frame received for 3\nI0611 12:16:04.488270    3575 log.go:172] (0xc0007f2280) (3) Data frame handling\nI0611 12:16:04.491062    3575 log.go:172] (0xc0000e1130) Data frame received for 1\nI0611 12:16:04.491091    3575 log.go:172] (0xc0007f21e0) (1) Data frame handling\nI0611 12:16:04.491109    3575 log.go:172] (0xc0007f21e0) (1) Data frame sent\nI0611 12:16:04.491127    3575 log.go:172] (0xc0000e1130) (0xc0007f21e0) Stream removed, broadcasting: 1\nI0611 12:16:04.491367    3575 log.go:172] (0xc0000e1130) Go away received\nI0611 12:16:04.491538    3575 log.go:172] (0xc0000e1130) (0xc0007f21e0) Stream removed, broadcasting: 1\nI0611 12:16:04.491574    3575 log.go:172] (0xc0000e1130) (0xc0007f2280) Stream removed, broadcasting: 3\nI0611 12:16:04.491594    3575 log.go:172] (0xc0000e1130) (0xc000661220) Stream removed, broadcasting: 5\n"
Jun 11 12:16:04.498: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jun 11 12:16:04.498: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jun 11 12:16:14.531: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jun 11 12:16:24.586: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6579 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun 11 12:16:24.829: INFO: stderr: "I0611 12:16:24.717039    3597 log.go:172] (0xc0000ead10) (0xc0007d15e0) Create stream\nI0611 12:16:24.717104    3597 log.go:172] (0xc0000ead10) (0xc0007d15e0) Stream added, broadcasting: 1\nI0611 12:16:24.720274    3597 log.go:172] (0xc0000ead10) Reply frame received for 1\nI0611 12:16:24.720324    3597 log.go:172] (0xc0000ead10) (0xc0003fc000) Create stream\nI0611 12:16:24.720345    3597 log.go:172] (0xc0000ead10) (0xc0003fc000) Stream added, broadcasting: 3\nI0611 12:16:24.721422    3597 log.go:172] (0xc0000ead10) Reply frame received for 3\nI0611 12:16:24.721462    3597 log.go:172] (0xc0000ead10) (0xc0003fe000) Create stream\nI0611 12:16:24.721482    3597 log.go:172] (0xc0000ead10) (0xc0003fe000) Stream added, broadcasting: 5\nI0611 12:16:24.722647    3597 log.go:172] (0xc0000ead10) Reply frame received for 5\nI0611 12:16:24.820922    3597 log.go:172] (0xc0000ead10) Data frame received for 5\nI0611 12:16:24.820953    3597 log.go:172] (0xc0003fe000) (5) Data frame handling\nI0611 12:16:24.820961    3597 log.go:172] (0xc0003fe000) (5) Data frame sent\nI0611 12:16:24.820967    3597 log.go:172] (0xc0000ead10) Data frame received for 5\nI0611 12:16:24.820972    3597 log.go:172] (0xc0003fe000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0611 12:16:24.820993    3597 log.go:172] (0xc0000ead10) Data frame received for 3\nI0611 12:16:24.821014    3597 log.go:172] (0xc0003fc000) (3) Data frame handling\nI0611 12:16:24.821035    3597 log.go:172] (0xc0003fc000) (3) Data frame sent\nI0611 12:16:24.821055    3597 log.go:172] (0xc0000ead10) Data frame received for 3\nI0611 12:16:24.821071    3597 log.go:172] (0xc0003fc000) (3) Data frame handling\nI0611 12:16:24.822408    3597 log.go:172] (0xc0000ead10) Data frame received for 1\nI0611 12:16:24.822438    3597 log.go:172] (0xc0007d15e0) (1) Data frame handling\nI0611 12:16:24.822451    3597 log.go:172] (0xc0007d15e0) (1) Data frame sent\nI0611 12:16:24.822465    3597 log.go:172] (0xc0000ead10) (0xc0007d15e0) Stream removed, broadcasting: 1\nI0611 12:16:24.822482    3597 log.go:172] (0xc0000ead10) Go away received\nI0611 12:16:24.822815    3597 log.go:172] (0xc0000ead10) (0xc0007d15e0) Stream removed, broadcasting: 1\nI0611 12:16:24.822829    3597 log.go:172] (0xc0000ead10) (0xc0003fc000) Stream removed, broadcasting: 3\nI0611 12:16:24.822836    3597 log.go:172] (0xc0000ead10) (0xc0003fe000) Stream removed, broadcasting: 5\n"
Jun 11 12:16:24.829: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jun 11 12:16:24.829: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jun 11 12:16:34.847: INFO: Waiting for StatefulSet statefulset-6579/ss2 to complete update
Jun 11 12:16:34.847: INFO: Waiting for Pod statefulset-6579/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jun 11 12:16:34.847: INFO: Waiting for Pod statefulset-6579/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jun 11 12:16:44.854: INFO: Waiting for StatefulSet statefulset-6579/ss2 to complete update
Jun 11 12:16:44.854: INFO: Waiting for Pod statefulset-6579/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jun 11 12:16:54.855: INFO: Waiting for StatefulSet statefulset-6579/ss2 to complete update
STEP: Rolling back to a previous revision
Jun 11 12:17:04.856: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6579 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jun 11 12:17:07.660: INFO: stderr: "I0611 12:17:07.542204    3617 log.go:172] (0xc0000e6420) (0xc000d26000) Create stream\nI0611 12:17:07.542253    3617 log.go:172] (0xc0000e6420) (0xc000d26000) Stream added, broadcasting: 1\nI0611 12:17:07.545244    3617 log.go:172] (0xc0000e6420) Reply frame received for 1\nI0611 12:17:07.545287    3617 log.go:172] (0xc0000e6420) (0xc00024c000) Create stream\nI0611 12:17:07.545294    3617 log.go:172] (0xc0000e6420) (0xc00024c000) Stream added, broadcasting: 3\nI0611 12:17:07.546184    3617 log.go:172] (0xc0000e6420) Reply frame received for 3\nI0611 12:17:07.546204    3617 log.go:172] (0xc0000e6420) (0xc00024c0a0) Create stream\nI0611 12:17:07.546210    3617 log.go:172] (0xc0000e6420) (0xc00024c0a0) Stream added, broadcasting: 5\nI0611 12:17:07.547149    3617 log.go:172] (0xc0000e6420) Reply frame received for 5\nI0611 12:17:07.604640    3617 log.go:172] (0xc0000e6420) Data frame received for 5\nI0611 12:17:07.604677    3617 log.go:172] (0xc00024c0a0) (5) Data frame handling\nI0611 12:17:07.604708    3617 log.go:172] (0xc00024c0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0611 12:17:07.651708    3617 log.go:172] (0xc0000e6420) Data frame received for 3\nI0611 12:17:07.651731    3617 log.go:172] (0xc00024c000) (3) Data frame handling\nI0611 12:17:07.651747    3617 log.go:172] (0xc00024c000) (3) Data frame sent\nI0611 12:17:07.651756    3617 log.go:172] (0xc0000e6420) Data frame received for 3\nI0611 12:17:07.651761    3617 log.go:172] (0xc00024c000) (3) Data frame handling\nI0611 12:17:07.651790    3617 log.go:172] (0xc0000e6420) Data frame received for 5\nI0611 12:17:07.651802    3617 log.go:172] (0xc00024c0a0) (5) Data frame handling\nI0611 12:17:07.653335    3617 log.go:172] (0xc0000e6420) Data frame received for 1\nI0611 12:17:07.653358    3617 log.go:172] (0xc000d26000) (1) Data frame handling\nI0611 12:17:07.653371    3617 log.go:172] (0xc000d26000) (1) Data frame sent\nI0611 12:17:07.653385    3617 log.go:172] (0xc0000e6420) (0xc000d26000) Stream removed, broadcasting: 1\nI0611 12:17:07.653494    3617 log.go:172] (0xc0000e6420) Go away received\nI0611 12:17:07.653663    3617 log.go:172] (0xc0000e6420) (0xc000d26000) Stream removed, broadcasting: 1\nI0611 12:17:07.653676    3617 log.go:172] (0xc0000e6420) (0xc00024c000) Stream removed, broadcasting: 3\nI0611 12:17:07.653683    3617 log.go:172] (0xc0000e6420) (0xc00024c0a0) Stream removed, broadcasting: 5\n"
Jun 11 12:17:07.660: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jun 11 12:17:07.660: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jun 11 12:17:17.694: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jun 11 12:17:27.747: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6579 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun 11 12:17:27.962: INFO: stderr: "I0611 12:17:27.868780    3652 log.go:172] (0xc0000e8420) (0xc00052c960) Create stream\nI0611 12:17:27.868871    3652 log.go:172] (0xc0000e8420) (0xc00052c960) Stream added, broadcasting: 1\nI0611 12:17:27.871920    3652 log.go:172] (0xc0000e8420) Reply frame received for 1\nI0611 12:17:27.871974    3652 log.go:172] (0xc0000e8420) (0xc0007b5180) Create stream\nI0611 12:17:27.871995    3652 log.go:172] (0xc0000e8420) (0xc0007b5180) Stream added, broadcasting: 3\nI0611 12:17:27.873007    3652 log.go:172] (0xc0000e8420) Reply frame received for 3\nI0611 12:17:27.873051    3652 log.go:172] (0xc0000e8420) (0xc00052ca00) Create stream\nI0611 12:17:27.873073    3652 log.go:172] (0xc0000e8420) (0xc00052ca00) Stream added, broadcasting: 5\nI0611 12:17:27.874273    3652 log.go:172] (0xc0000e8420) Reply frame received for 5\nI0611 12:17:27.948638    3652 log.go:172] (0xc0000e8420) Data frame received for 5\nI0611 12:17:27.948689    3652 log.go:172] (0xc00052ca00) (5) Data frame handling\nI0611 12:17:27.948709    3652 log.go:172] (0xc00052ca00) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0611 12:17:27.948733    3652 log.go:172] (0xc0000e8420) Data frame received for 3\nI0611 12:17:27.948747    3652 log.go:172] (0xc0007b5180) (3) Data frame handling\nI0611 12:17:27.948762    3652 log.go:172] (0xc0007b5180) (3) Data frame sent\nI0611 12:17:27.948789    3652 log.go:172] (0xc0000e8420) Data frame received for 5\nI0611 12:17:27.948822    3652 log.go:172] (0xc00052ca00) (5) Data frame handling\nI0611 12:17:27.948848    3652 log.go:172] (0xc0000e8420) Data frame received for 3\nI0611 12:17:27.948861    3652 log.go:172] (0xc0007b5180) (3) Data frame handling\nI0611 12:17:27.950439    3652 log.go:172] (0xc0000e8420) Data frame received for 1\nI0611 12:17:27.950473    3652 log.go:172] (0xc00052c960) (1) Data frame handling\nI0611 12:17:27.950495    3652 log.go:172] (0xc00052c960) (1) Data frame sent\nI0611 12:17:27.950523    3652 log.go:172] (0xc0000e8420) (0xc00052c960) Stream removed, broadcasting: 1\nI0611 12:17:27.950560    3652 log.go:172] (0xc0000e8420) Go away received\nI0611 12:17:27.950902    3652 log.go:172] (0xc0000e8420) (0xc00052c960) Stream removed, broadcasting: 1\nI0611 12:17:27.950920    3652 log.go:172] (0xc0000e8420) (0xc0007b5180) Stream removed, broadcasting: 3\nI0611 12:17:27.950931    3652 log.go:172] (0xc0000e8420) (0xc00052ca00) Stream removed, broadcasting: 5\n"
Jun 11 12:17:27.962: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jun 11 12:17:27.962: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jun 11 12:17:37.983: INFO: Waiting for StatefulSet statefulset-6579/ss2 to complete update
Jun 11 12:17:37.983: INFO: Waiting for Pod statefulset-6579/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jun 11 12:17:37.983: INFO: Waiting for Pod statefulset-6579/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jun 11 12:17:37.983: INFO: Waiting for Pod statefulset-6579/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jun 11 12:17:47.991: INFO: Waiting for StatefulSet statefulset-6579/ss2 to complete update
Jun 11 12:17:47.991: INFO: Waiting for Pod statefulset-6579/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jun 11 12:17:47.991: INFO: Waiting for Pod statefulset-6579/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jun 11 12:17:57.991: INFO: Waiting for StatefulSet statefulset-6579/ss2 to complete update
Jun 11 12:17:57.991: INFO: Waiting for Pod statefulset-6579/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jun 11 12:18:07.990: INFO: Deleting all statefulset in ns statefulset-6579
Jun 11 12:18:07.992: INFO: Scaling statefulset ss2 to 0
Jun 11 12:18:38.024: INFO: Waiting for statefulset status.replicas updated to 0
Jun 11 12:18:38.028: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:18:38.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6579" for this suite.

• [SLOW TEST:174.144 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":259,"skipped":4342,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:18:38.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jun 11 12:18:38.140: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6211'
Jun 11 12:18:38.455: INFO: stderr: ""
Jun 11 12:18:38.455: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Jun 11 12:18:38.455: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6211'
Jun 11 12:18:39.437: INFO: stderr: ""
Jun 11 12:18:39.437: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jun 11 12:18:40.760: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 11 12:18:40.760: INFO: Found 0 / 1
Jun 11 12:18:41.470: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 11 12:18:41.470: INFO: Found 0 / 1
Jun 11 12:18:42.441: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 11 12:18:42.441: INFO: Found 1 / 1
Jun 11 12:18:42.441: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jun 11 12:18:42.444: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 11 12:18:42.444: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jun 11 12:18:42.444: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe pod agnhost-master-c75xp --namespace=kubectl-6211'
Jun 11 12:18:42.575: INFO: stderr: ""
Jun 11 12:18:42.575: INFO: stdout: "Name:         agnhost-master-c75xp\nNamespace:    kubectl-6211\nPriority:     0\nNode:         kali-worker2/172.17.0.18\nStart Time:   Thu, 11 Jun 2020 12:18:38 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.244.1.61\nIPs:\n  IP:           10.244.1.61\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://c4c6ac45bfcca92473e046ccf42fc2095fc4badf5481440d3b0b3463669379ee\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Image ID:       us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Thu, 11 Jun 2020 12:18:41 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vbzzp (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-vbzzp:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-vbzzp\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                   Message\n  ----    ------     ----       ----                   -------\n  Normal  Scheduled    default-scheduler      Successfully assigned kubectl-6211/agnhost-master-c75xp to kali-worker2\n  Normal  Pulled     2s         kubelet, kali-worker2  Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n  Normal  Created    1s         kubelet, kali-worker2  Created container agnhost-master\n  Normal  Started    1s         kubelet, kali-worker2  Started container agnhost-master\n"
Jun 11 12:18:42.576: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-6211'
Jun 11 12:18:42.696: INFO: stderr: ""
Jun 11 12:18:42.696: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-6211\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  4s    replication-controller  Created pod: agnhost-master-c75xp\n"
Jun 11 12:18:42.696: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-6211'
Jun 11 12:18:42.802: INFO: stderr: ""
Jun 11 12:18:42.802: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-6211\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.110.15.101\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.1.61:6379\nSession Affinity:  None\nEvents:            \n"
Jun 11 12:18:42.806: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe node kali-control-plane'
Jun 11 12:18:42.951: INFO: stderr: ""
Jun 11 12:18:42.951: INFO: stdout: "Name:               kali-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kali-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Wed, 29 Apr 2020 09:30:59 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  kali-control-plane\n  AcquireTime:     \n  RenewTime:       Thu, 11 Jun 2020 12:18:36 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Thu, 11 Jun 2020 12:13:59 +0000   Wed, 29 Apr 2020 09:30:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Thu, 11 Jun 2020 12:13:59 +0000   Wed, 29 Apr 2020 09:30:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Thu, 11 Jun 2020 12:13:59 +0000   Wed, 29 Apr 2020 09:30:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Thu, 11 Jun 2020 12:13:59 +0000   Wed, 29 Apr 2020 09:31:34 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.17.0.19\n  Hostname:    kali-control-plane\nCapacity:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759892Ki\n  pods:               110\nAllocatable:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759892Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 2146cf85bed648199604ab2e0e9ac609\n  System UUID:                e83c0db4-babe-44fc-9dad-b5eeae6d23fd\n  Boot ID:                    ca2aa731-f890-4956-92a1-ff8c7560d571\n  Kernel Version:             4.15.0-88-generic\n  OS Image:                   Ubuntu 19.10\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.3-14-g449e9269\n  Kubelet Version:            v1.18.2\n  Kube-Proxy Version:         v1.18.2\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-66bff467f8-rvq2k                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     43d\n  kube-system                 coredns-66bff467f8-w6zxd                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     43d\n  kube-system                 etcd-kali-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         43d\n  kube-system                 kindnet-65djz                                 100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      43d\n  kube-system                 kube-apiserver-kali-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         43d\n  kube-system                 kube-controller-manager-kali-control-plane    200m (1%)     0 (0%)  
    0 (0%)           0 (0%)         43d\n  kube-system                 kube-proxy-pnhtq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         43d\n  kube-system                 kube-scheduler-kali-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         43d\n  local-path-storage          local-path-provisioner-bd4bb6b75-6l9ph        0 (0%)        0 (0%)      0 (0%)           0 (0%)         43d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:              \n"
Jun 11 12:18:42.952: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe namespace kubectl-6211'
Jun 11 12:18:43.056: INFO: stderr: ""
Jun 11 12:18:43.056: INFO: stdout: "Name:         kubectl-6211\nLabels:       e2e-framework=kubectl\n              e2e-run=c1cb240e-41cb-4926-b280-95071473e345\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:18:43.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6211" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":275,"completed":260,"skipped":4371,"failed":0}

------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:18:43.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0611 12:18:44.438754       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 11 12:18:44.438: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:18:44.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4131" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":261,"skipped":4371,"failed":0}
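The test above deletes a Deployment with `deleteOptions.propagationPolicy: Orphan` and then verifies the garbage collector does not delete the ReplicaSet the Deployment created. A minimal in-memory sketch of that semantics (hypothetical types, not client-go; the real GC works on ownerReferences in object metadata):

```python
# Illustrative model of deleteOptions.propagationPolicy. "Orphan" keeps
# dependents alive but detaches them from the deleted owner; other
# policies let the garbage collector delete dependents too.

def delete_with_policy(objects, owner_uid, policy):
    """Delete the owner object; handle its dependents per the policy.

    objects: dict of uid -> {"owner": owner uid or None}
    """
    dependents = [uid for uid, o in objects.items() if o.get("owner") == owner_uid]
    del objects[owner_uid]
    for uid in dependents:
        if policy == "Orphan":
            objects[uid]["owner"] = None   # dependent survives, orphaned
        else:
            del objects[uid]               # GC removes the dependent
    return objects

store = {"deploy-1": {"owner": None}, "rs-1": {"owner": "deploy-1"}}
delete_with_policy(store, "deploy-1", "Orphan")
assert "rs-1" in store and store["rs-1"]["owner"] is None
```

This mirrors what the test's "wait for deployment deletion to see if the garbage collector mistakenly deletes the rs" step checks: after the orphan delete, the ReplicaSet must still exist.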
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:18:44.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun 11 12:18:44.609: INFO: Waiting up to 5m0s for pod "pod-1826cecf-198d-479f-a094-725b38809910" in namespace "emptydir-2711" to be "Succeeded or Failed"
Jun 11 12:18:44.843: INFO: Pod "pod-1826cecf-198d-479f-a094-725b38809910": Phase="Pending", Reason="", readiness=false. Elapsed: 234.526117ms
Jun 11 12:18:46.856: INFO: Pod "pod-1826cecf-198d-479f-a094-725b38809910": Phase="Pending", Reason="", readiness=false. Elapsed: 2.24680759s
Jun 11 12:18:48.916: INFO: Pod "pod-1826cecf-198d-479f-a094-725b38809910": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.306707349s
STEP: Saw pod success
Jun 11 12:18:48.916: INFO: Pod "pod-1826cecf-198d-479f-a094-725b38809910" satisfied condition "Succeeded or Failed"
Jun 11 12:18:48.919: INFO: Trying to get logs from node kali-worker pod pod-1826cecf-198d-479f-a094-725b38809910 container test-container: 
STEP: delete the pod
Jun 11 12:18:49.073: INFO: Waiting for pod pod-1826cecf-198d-479f-a094-725b38809910 to disappear
Jun 11 12:18:49.098: INFO: Pod pod-1826cecf-198d-479f-a094-725b38809910 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:18:49.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2711" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":262,"skipped":4421,"failed":0}
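The `(root,0777,tmpfs)` test above mounts a tmpfs-backed `emptyDir` volume and checks, from inside the pod, that the mount carries mode 0777. A local sketch of that permission check, with a plain temporary directory standing in for the tmpfs mount (illustrative only; the real test inspects the volume inside a container):

```python
import os
import stat
import tempfile

def volume_mode(path):
    """Return just the permission bits of a directory, as the test's
    container does for its emptyDir mount point."""
    return stat.S_IMODE(os.stat(path).st_mode)

# A temp directory stands in for the tmpfs-backed emptyDir volume.
with tempfile.TemporaryDirectory() as mount:
    os.chmod(mount, 0o777)           # the emptyDir's requested default mode
    assert volume_mode(mount) == 0o777
```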
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:18:49.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jun 11 12:18:49.310: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ea3d6aa5-c5e7-43c3-8859-aa5600e41082" in namespace "downward-api-4363" to be "Succeeded or Failed"
Jun 11 12:18:49.324: INFO: Pod "downwardapi-volume-ea3d6aa5-c5e7-43c3-8859-aa5600e41082": Phase="Pending", Reason="", readiness=false. Elapsed: 13.620363ms
Jun 11 12:18:51.328: INFO: Pod "downwardapi-volume-ea3d6aa5-c5e7-43c3-8859-aa5600e41082": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017678708s
Jun 11 12:18:53.333: INFO: Pod "downwardapi-volume-ea3d6aa5-c5e7-43c3-8859-aa5600e41082": Phase="Running", Reason="", readiness=true. Elapsed: 4.022540102s
Jun 11 12:18:55.337: INFO: Pod "downwardapi-volume-ea3d6aa5-c5e7-43c3-8859-aa5600e41082": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027413933s
STEP: Saw pod success
Jun 11 12:18:55.338: INFO: Pod "downwardapi-volume-ea3d6aa5-c5e7-43c3-8859-aa5600e41082" satisfied condition "Succeeded or Failed"
Jun 11 12:18:55.341: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-ea3d6aa5-c5e7-43c3-8859-aa5600e41082 container client-container: 
STEP: delete the pod
Jun 11 12:18:55.396: INFO: Waiting for pod downwardapi-volume-ea3d6aa5-c5e7-43c3-8859-aa5600e41082 to disappear
Jun 11 12:18:55.402: INFO: Pod downwardapi-volume-ea3d6aa5-c5e7-43c3-8859-aa5600e41082 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:18:55.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4363" for this suite.

• [SLOW TEST:6.301 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":263,"skipped":4513,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:18:55.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:18:59.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8366" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":264,"skipped":4519,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:19:00.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Jun 11 12:19:04.992: INFO: Successfully updated pod "annotationupdatefad4cc19-894a-4731-91cb-49e466cd8c4e"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:19:07.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4952" for this suite.

• [SLOW TEST:6.836 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":265,"skipped":4545,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:19:07.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's command
Jun 11 12:19:07.076: INFO: Waiting up to 5m0s for pod "var-expansion-462c38a1-111d-4fcd-b631-e7b068c65328" in namespace "var-expansion-6696" to be "Succeeded or Failed"
Jun 11 12:19:07.090: INFO: Pod "var-expansion-462c38a1-111d-4fcd-b631-e7b068c65328": Phase="Pending", Reason="", readiness=false. Elapsed: 13.387739ms
Jun 11 12:19:09.094: INFO: Pod "var-expansion-462c38a1-111d-4fcd-b631-e7b068c65328": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017937806s
Jun 11 12:19:11.099: INFO: Pod "var-expansion-462c38a1-111d-4fcd-b631-e7b068c65328": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022435176s
STEP: Saw pod success
Jun 11 12:19:11.099: INFO: Pod "var-expansion-462c38a1-111d-4fcd-b631-e7b068c65328" satisfied condition "Succeeded or Failed"
Jun 11 12:19:11.102: INFO: Trying to get logs from node kali-worker pod var-expansion-462c38a1-111d-4fcd-b631-e7b068c65328 container dapi-container: 
STEP: delete the pod
Jun 11 12:19:11.158: INFO: Waiting for pod var-expansion-462c38a1-111d-4fcd-b631-e7b068c65328 to disappear
Jun 11 12:19:11.269: INFO: Pod var-expansion-462c38a1-111d-4fcd-b631-e7b068c65328 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:19:11.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6696" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":266,"skipped":4578,"failed":0}
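The Variable Expansion test above verifies that `$(VAR)` references in a container's `command` are substituted from the container's environment. A loose sketch of that substitution rule (simplified: it ignores the `$$` escape the real expander also supports, and leaves unresolvable references verbatim, which matches Kubernetes behavior):

```python
def expand(command, env):
    """Expand $(VAR) references in a container command string.

    A reference to a defined variable is replaced; an unresolvable
    reference is left exactly as written.
    """
    out, i = [], 0
    while i < len(command):
        if command.startswith("$(", i):
            end = command.find(")", i)
            if end != -1 and command[i + 2:end] in env:
                out.append(env[command[i + 2:end]])
                i = end + 1
                continue
        out.append(command[i])
        i += 1
    return "".join(out)

assert expand("echo $(POD_NAME) in $(MISSING)",
              {"POD_NAME": "var-expansion-test"}) == \
       "echo var-expansion-test in $(MISSING)"
```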
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:19:11.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jun 11 12:19:11.421: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5e4e8a0b-6db7-4ffe-91f6-e06210466ec3" in namespace "projected-902" to be "Succeeded or Failed"
Jun 11 12:19:11.446: INFO: Pod "downwardapi-volume-5e4e8a0b-6db7-4ffe-91f6-e06210466ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 24.684068ms
Jun 11 12:19:13.471: INFO: Pod "downwardapi-volume-5e4e8a0b-6db7-4ffe-91f6-e06210466ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050214905s
Jun 11 12:19:15.475: INFO: Pod "downwardapi-volume-5e4e8a0b-6db7-4ffe-91f6-e06210466ec3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05376023s
STEP: Saw pod success
Jun 11 12:19:15.475: INFO: Pod "downwardapi-volume-5e4e8a0b-6db7-4ffe-91f6-e06210466ec3" satisfied condition "Succeeded or Failed"
Jun 11 12:19:15.477: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-5e4e8a0b-6db7-4ffe-91f6-e06210466ec3 container client-container: 
STEP: delete the pod
Jun 11 12:19:15.510: INFO: Waiting for pod downwardapi-volume-5e4e8a0b-6db7-4ffe-91f6-e06210466ec3 to disappear
Jun 11 12:19:15.524: INFO: Pod downwardapi-volume-5e4e8a0b-6db7-4ffe-91f6-e06210466ec3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:19:15.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-902" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":267,"skipped":4582,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:19:15.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:19:26.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-907" for this suite.

• [SLOW TEST:11.145 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":268,"skipped":4592,"failed":0}
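The ResourceQuota test above checks that creating a ReplicationController is captured in the quota's `used` status and that deleting it releases the usage. A toy model of that bookkeeping (hypothetical `Quota` class, not the apiserver's admission plugin):

```python
# Illustrative quota accounting: "used" tracks live objects per resource,
# and a create that would exceed "hard" is rejected at admission time.
class Quota:
    def __init__(self, hard):
        self.hard = dict(hard)               # e.g. {"replicationcontrollers": 1}
        self.used = {k: 0 for k in hard}

    def charge(self, resource, delta):
        new = self.used[resource] + delta
        if new > self.hard[resource]:
            raise RuntimeError(f"exceeded quota for {resource}")
        self.used[resource] = new

q = Quota({"replicationcontrollers": 1})
q.charge("replicationcontrollers", +1)   # create the RC: usage captured
q.charge("replicationcontrollers", -1)   # delete the RC: usage released
assert q.used["replicationcontrollers"] == 0
```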
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:19:26.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service endpoint-test2 in namespace services-8560
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8560 to expose endpoints map[]
Jun 11 12:19:26.831: INFO: Get endpoints failed (20.674186ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jun 11 12:19:27.834: INFO: successfully validated that service endpoint-test2 in namespace services-8560 exposes endpoints map[] (1.024062929s elapsed)
STEP: Creating pod pod1 in namespace services-8560
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8560 to expose endpoints map[pod1:[80]]
Jun 11 12:19:31.097: INFO: successfully validated that service endpoint-test2 in namespace services-8560 exposes endpoints map[pod1:[80]] (3.256299306s elapsed)
STEP: Creating pod pod2 in namespace services-8560
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8560 to expose endpoints map[pod1:[80] pod2:[80]]
Jun 11 12:19:34.266: INFO: successfully validated that service endpoint-test2 in namespace services-8560 exposes endpoints map[pod1:[80] pod2:[80]] (3.163516451s elapsed)
STEP: Deleting pod pod1 in namespace services-8560
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8560 to expose endpoints map[pod2:[80]]
Jun 11 12:19:35.317: INFO: successfully validated that service endpoint-test2 in namespace services-8560 exposes endpoints map[pod2:[80]] (1.045352733s elapsed)
STEP: Deleting pod pod2 in namespace services-8560
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8560 to expose endpoints map[]
Jun 11 12:19:36.337: INFO: successfully validated that service endpoint-test2 in namespace services-8560 exposes endpoints map[] (1.01498605s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:19:36.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8560" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:9.854 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":275,"completed":269,"skipped":4620,"failed":0}
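The Services test above watches the endpoints map evolve as backing pods come and go: `map[]` → `map[pod1:[80]]` → `map[pod1:[80] pod2:[80]]` → `map[pod2:[80]]` → `map[]`. A trivial sketch of that bookkeeping (an in-memory dict standing in for the Endpoints object the endpoints controller maintains):

```python
# The service's endpoints map mirrors which ready pods currently back it,
# keyed by pod name with the list of exposed ports.
endpoints = {}

def pod_started(name, port=80):
    endpoints[name] = [port]

def pod_deleted(name):
    endpoints.pop(name, None)

pod_started("pod1")
pod_started("pod2")
assert endpoints == {"pod1": [80], "pod2": [80]}
pod_deleted("pod1")
assert endpoints == {"pod2": [80]}
pod_deleted("pod2")
assert endpoints == {}
```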
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:19:36.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Jun 11 12:19:36.576: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jun 11 12:19:36.601: INFO: Waiting for terminating namespaces to be deleted...
Jun 11 12:19:36.603: INFO: 
Logging pods the kubelet thinks is on node kali-worker before test
Jun 11 12:19:36.609: INFO: pod1 from services-8560 started at 2020-06-11 12:19:28 +0000 UTC (1 container statuses recorded)
Jun 11 12:19:36.609: INFO: 	Container pause ready: false, restart count 0
Jun 11 12:19:36.609: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Jun 11 12:19:36.609: INFO: 	Container kindnet-cni ready: true, restart count 3
Jun 11 12:19:36.609: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Jun 11 12:19:36.609: INFO: 	Container kube-proxy ready: true, restart count 0
Jun 11 12:19:36.609: INFO: 
Logging pods the kubelet thinks is on node kali-worker2 before test
Jun 11 12:19:36.614: INFO: pod2 from services-8560 started at 2020-06-11 12:19:31 +0000 UTC (1 container statuses recorded)
Jun 11 12:19:36.614: INFO: 	Container pause ready: true, restart count 0
Jun 11 12:19:36.614: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Jun 11 12:19:36.614: INFO: 	Container kindnet-cni ready: true, restart count 2
Jun 11 12:19:36.614: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Jun 11 12:19:36.614: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-21612ee9-5d33-49db-badc-91f92f80e99f 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-21612ee9-5d33-49db-badc-91f92f80e99f off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-21612ee9-5d33-49db-badc-91f92f80e99f
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:19:52.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-490" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:16.356 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":270,"skipped":4657,"failed":0}
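The scheduling test above places three pods with the same hostPort 54321 on one node: pod1 on hostIP 127.0.0.1/TCP, pod2 on 127.0.0.2/TCP, and pod3 on 127.0.0.2/UDP, all expected to schedule. The underlying rule is that two host-port mappings conflict only when port and protocol match and the host IPs overlap. A sketch of that check (simplified from the scheduler's HostPorts predicate):

```python
# Two (hostIP, hostPort, protocol) mappings collide only when the port
# AND protocol match and the IPs overlap; 0.0.0.0 overlaps every IP.
def conflicts(a, b):
    ip_a, port_a, proto_a = a
    ip_b, port_b, proto_b = b
    if port_a != port_b or proto_a != proto_b:
        return False
    return ip_a == ip_b or "0.0.0.0" in (ip_a, ip_b)

pod1 = ("127.0.0.1", 54321, "TCP")
pod2 = ("127.0.0.2", 54321, "TCP")   # same port, different hostIP: no conflict
pod3 = ("127.0.0.2", 54321, "UDP")   # same IP/port as pod2, UDP: no conflict
assert not conflicts(pod1, pod2)
assert not conflicts(pod2, pod3)
assert conflicts(pod2, ("127.0.0.2", 54321, "TCP"))
```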
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:19:52.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jun 11 12:19:58.058: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:19:58.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8925" for this suite.

• [SLOW TEST:5.253 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":271,"skipped":4692,"failed":0}
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:19:58.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-0fc4dfc0-2694-4aeb-9d71-ad2bb5496ccf
STEP: Creating a pod to test consume configMaps
Jun 11 12:19:58.269: INFO: Waiting up to 5m0s for pod "pod-configmaps-00adf56f-dd6b-4e59-8eb0-38485d9446ee" in namespace "configmap-2493" to be "Succeeded or Failed"
Jun 11 12:19:58.316: INFO: Pod "pod-configmaps-00adf56f-dd6b-4e59-8eb0-38485d9446ee": Phase="Pending", Reason="", readiness=false. Elapsed: 47.381539ms
Jun 11 12:20:00.320: INFO: Pod "pod-configmaps-00adf56f-dd6b-4e59-8eb0-38485d9446ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050976351s
Jun 11 12:20:02.336: INFO: Pod "pod-configmaps-00adf56f-dd6b-4e59-8eb0-38485d9446ee": Phase="Running", Reason="", readiness=true. Elapsed: 4.067156339s
Jun 11 12:20:04.340: INFO: Pod "pod-configmaps-00adf56f-dd6b-4e59-8eb0-38485d9446ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.071107258s
STEP: Saw pod success
Jun 11 12:20:04.340: INFO: Pod "pod-configmaps-00adf56f-dd6b-4e59-8eb0-38485d9446ee" satisfied condition "Succeeded or Failed"
Jun 11 12:20:04.343: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-00adf56f-dd6b-4e59-8eb0-38485d9446ee container configmap-volume-test: 
STEP: delete the pod
Jun 11 12:20:04.762: INFO: Waiting for pod pod-configmaps-00adf56f-dd6b-4e59-8eb0-38485d9446ee to disappear
Jun 11 12:20:04.784: INFO: Pod pod-configmaps-00adf56f-dd6b-4e59-8eb0-38485d9446ee no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:20:04.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2493" for this suite.

• [SLOW TEST:6.664 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":272,"skipped":4699,"failed":0}
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:20:04.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:20:23.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2860" for this suite.

• [SLOW TEST:19.082 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":273,"skipped":4699,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:20:23.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-8c890b04-86cc-4052-ac59-c823ff713fac
STEP: Creating a pod to test consume secrets
Jun 11 12:20:23.936: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-224e5dc5-ddea-45fc-a9e2-c8c72152d81e" in namespace "projected-7290" to be "Succeeded or Failed"
Jun 11 12:20:23.941: INFO: Pod "pod-projected-secrets-224e5dc5-ddea-45fc-a9e2-c8c72152d81e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.197391ms
Jun 11 12:20:25.959: INFO: Pod "pod-projected-secrets-224e5dc5-ddea-45fc-a9e2-c8c72152d81e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022282781s
Jun 11 12:20:28.043: INFO: Pod "pod-projected-secrets-224e5dc5-ddea-45fc-a9e2-c8c72152d81e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.106837713s
STEP: Saw pod success
Jun 11 12:20:28.043: INFO: Pod "pod-projected-secrets-224e5dc5-ddea-45fc-a9e2-c8c72152d81e" satisfied condition "Succeeded or Failed"
Jun 11 12:20:28.046: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-224e5dc5-ddea-45fc-a9e2-c8c72152d81e container projected-secret-volume-test: 
STEP: delete the pod
Jun 11 12:20:28.095: INFO: Waiting for pod pod-projected-secrets-224e5dc5-ddea-45fc-a9e2-c8c72152d81e to disappear
Jun 11 12:20:28.101: INFO: Pod pod-projected-secrets-224e5dc5-ddea-45fc-a9e2-c8c72152d81e no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:20:28.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7290" for this suite.
•
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":274,"skipped":4705,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jun 11 12:20:28.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Jun 11 12:20:28.143: INFO: namespace kubectl-7862
Jun 11 12:20:28.143: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7862'
Jun 11 12:20:29.002: INFO: stderr: ""
Jun 11 12:20:29.002: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jun 11 12:20:30.006: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 11 12:20:30.006: INFO: Found 0 / 1
Jun 11 12:20:31.166: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 11 12:20:31.166: INFO: Found 0 / 1
Jun 11 12:20:32.007: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 11 12:20:32.007: INFO: Found 0 / 1
Jun 11 12:20:33.006: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 11 12:20:33.007: INFO: Found 1 / 1
Jun 11 12:20:33.007: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jun 11 12:20:33.010: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 11 12:20:33.010: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jun 11 12:20:33.010: INFO: wait on agnhost-master startup in kubectl-7862 
Jun 11 12:20:33.010: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs agnhost-master-9zcg9 agnhost-master --namespace=kubectl-7862'
Jun 11 12:20:33.119: INFO: stderr: ""
Jun 11 12:20:33.119: INFO: stdout: "Paused\n"
STEP: exposing RC
Jun 11 12:20:33.119: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7862'
Jun 11 12:20:33.302: INFO: stderr: ""
Jun 11 12:20:33.302: INFO: stdout: "service/rm2 exposed\n"
Jun 11 12:20:33.419: INFO: Service rm2 in namespace kubectl-7862 found.
STEP: exposing service
Jun 11 12:20:35.426: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7862'
Jun 11 12:20:35.566: INFO: stderr: ""
Jun 11 12:20:35.566: INFO: stdout: "service/rm3 exposed\n"
Jun 11 12:20:35.633: INFO: Service rm3 in namespace kubectl-7862 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jun 11 12:20:37.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7862" for this suite.

• [SLOW TEST:9.537 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":275,"completed":275,"skipped":4711,"failed":0}
SSSSSS
Jun 11 12:20:37.646: INFO: Running AfterSuite actions on all nodes
Jun 11 12:20:37.657: INFO: Running AfterSuite actions on node 1
Jun 11 12:20:37.657: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}

Ran 275 of 4992 Specs in 5185.351 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS