I0308 15:09:32.738682 7 test_context.go:416] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0308 15:09:32.738865 7 e2e.go:109] Starting e2e run "2ddc46cb-3b0d-4dae-be69-a78d2ea54b5c" on Ginkgo node 1
{"msg":"Test Suite starting","total":280,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1583680171 - Will randomize all specs
Will run 280 of 4845 specs

Mar 8 15:09:32.824: INFO: >>> kubeConfig: /root/.kube/config
Mar 8 15:09:32.827: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 8 15:09:32.842: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 8 15:09:32.869: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 8 15:09:32.869: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 8 15:09:32.869: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 8 15:09:32.876: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 8 15:09:32.876: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 8 15:09:32.877: INFO: e2e test version: v1.18.0-alpha.2.152+426b3538900329
Mar 8 15:09:32.877: INFO: kube-apiserver version: v1.17.0
Mar 8 15:09:32.877: INFO: >>> kubeConfig: /root/.kube/config
Mar 8 15:09:32.881: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 15:09:32.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
Mar 8 15:09:32.930: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 15:09:33.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9826" for this suite.
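(For readers reproducing this spec by hand: a minimal sketch of what it exercises. The pod name below is illustrative, not the name the suite generates.)

# A container whose command always exits non-zero keeps failing under the
# default restartPolicy; the spec asserts such a pod can still be deleted.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-pod         # hypothetical name
spec:
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]   # always fails
EOF
kubectl delete pod bin-false-pod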
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":280,"completed":1,"skipped":17,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:09:33.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1384 STEP: creating the pod Mar 8 15:09:33.106: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4191' Mar 8 15:09:35.237: INFO: stderr: "" Mar 8 15:09:35.237: INFO: stdout: "pod/pause created\n" Mar 8 15:09:35.237: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 8 15:09:35.237: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4191" to be "running and ready" Mar 8 15:09:35.242: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.835709ms Mar 8 15:09:37.244: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007415466s Mar 8 15:09:39.248: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.01117253s Mar 8 15:09:39.248: INFO: Pod "pause" satisfied condition "running and ready" Mar 8 15:09:39.248: INFO: Wanted all 1 pods to be running and ready. Result: true. 
[It] should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: adding the label testing-label with value testing-label-value to a pod
Mar 8 15:09:39.248: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-4191'
Mar 8 15:09:39.361: INFO: stderr: ""
Mar 8 15:09:39.361: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Mar 8 15:09:39.361: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4191'
Mar 8 15:09:39.472: INFO: stderr: ""
Mar 8 15:09:39.472: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n"
STEP: removing the label testing-label of a pod
Mar 8 15:09:39.472: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-4191'
Mar 8 15:09:39.578: INFO: stderr: ""
Mar 8 15:09:39.578: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Mar 8 15:09:39.578: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4191'
Mar 8 15:09:39.690: INFO: stderr: ""
Mar 8 15:09:39.690: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391
STEP: using delete to clean up resources
Mar 8 15:09:39.690: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4191'
Mar 8 15:09:39.820: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 8 15:09:39.820: INFO: stdout: "pod \"pause\" force deleted\n"
Mar 8 15:09:39.820: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-4191'
Mar 8 15:09:39.916: INFO: stderr: "No resources found in kubectl-4191 namespace.\n"
Mar 8 15:09:39.916: INFO: stdout: ""
Mar 8 15:09:39.916: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-4191 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 8 15:09:39.990: INFO: stderr: ""
Mar 8 15:09:39.990: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 15:09:39.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4191" for this suite.
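(The label round-trip above can be reproduced with plain kubectl; the --server and --kubeconfig flags from the run are omitted, pod and namespace names match the log.)

kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-4191
# -L adds a TESTING-LABEL column so the value is visible in list output
kubectl get pod pause -L testing-label --namespace=kubectl-4191
# a trailing '-' on the key removes the label
kubectl label pods pause testing-label- --namespace=kubectl-4191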
• [SLOW TEST:6.952 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1381
    should update the label on a resource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":280,"completed":2,"skipped":22,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 15:09:39.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Mar 8 15:09:40.090: INFO: Waiting up to 5m0s for pod "downward-api-630f719f-8496-403e-8f6c-e7f65d661dc9" in namespace "downward-api-1154" to be "success or failure"
Mar 8 15:09:40.105: INFO: Pod "downward-api-630f719f-8496-403e-8f6c-e7f65d661dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.647953ms
Mar 8 15:09:42.110: INFO: Pod "downward-api-630f719f-8496-403e-8f6c-e7f65d661dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019328714s
Mar 8 15:09:44.113: INFO: Pod "downward-api-630f719f-8496-403e-8f6c-e7f65d661dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022317459s
Mar 8 15:09:46.116: INFO: Pod "downward-api-630f719f-8496-403e-8f6c-e7f65d661dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025115082s
Mar 8 15:09:48.120: INFO: Pod "downward-api-630f719f-8496-403e-8f6c-e7f65d661dc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.029163085s
STEP: Saw pod success
Mar 8 15:09:48.120: INFO: Pod "downward-api-630f719f-8496-403e-8f6c-e7f65d661dc9" satisfied condition "success or failure"
Mar 8 15:09:48.122: INFO: Trying to get logs from node latest-worker pod downward-api-630f719f-8496-403e-8f6c-e7f65d661dc9 container dapi-container:
STEP: delete the pod
Mar 8 15:09:48.190: INFO: Waiting for pod downward-api-630f719f-8496-403e-8f6c-e7f65d661dc9 to disappear
Mar 8 15:09:48.196: INFO: Pod downward-api-630f719f-8496-403e-8f6c-e7f65d661dc9 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 15:09:48.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1154" for this suite.
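(A minimal sketch of the kind of pod this spec creates. The resourceFieldRef keys are the real downward-API field names; the pod name, image, and resource values are illustrative.)

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-env-demo            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    resources:
      requests: {cpu: 250m, memory: 32Mi}
      limits:   {cpu: 500m, memory: 64Mi}
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
EOF
# Expected log output: CPU_LIMIT=1 and MEMORY_REQUEST=33554432
# (values are rounded up in the divisor's unit; the default divisor is 1).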
• [SLOW TEST:8.208 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":280,"completed":3,"skipped":50,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 15:09:48.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 8 15:09:48.326: INFO: Waiting up to 5m0s for pod "pod-70cb21bc-45e8-4fd9-b596-1dd06ec148d9" in namespace "emptydir-9965" to be "success or failure"
Mar 8 15:09:48.340: INFO: Pod "pod-70cb21bc-45e8-4fd9-b596-1dd06ec148d9": Phase="Pending", Reason="", readiness=false. Elapsed: 13.504019ms
Mar 8 15:09:50.344: INFO: Pod "pod-70cb21bc-45e8-4fd9-b596-1dd06ec148d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017449973s
Mar 8 15:09:52.348: INFO: Pod "pod-70cb21bc-45e8-4fd9-b596-1dd06ec148d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021453758s
Mar 8 15:09:54.352: INFO: Pod "pod-70cb21bc-45e8-4fd9-b596-1dd06ec148d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02542298s
STEP: Saw pod success
Mar 8 15:09:54.352: INFO: Pod "pod-70cb21bc-45e8-4fd9-b596-1dd06ec148d9" satisfied condition "success or failure"
Mar 8 15:09:54.354: INFO: Trying to get logs from node latest-worker pod pod-70cb21bc-45e8-4fd9-b596-1dd06ec148d9 container test-container:
STEP: delete the pod
Mar 8 15:09:54.393: INFO: Waiting for pod pod-70cb21bc-45e8-4fd9-b596-1dd06ec148d9 to disappear
Mar 8 15:09:54.409: INFO: Pod pod-70cb21bc-45e8-4fd9-b596-1dd06ec148d9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 15:09:54.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9965" for this suite.
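(For orientation, a hand-written pod along the same lines. The suite uses its own mount-test image and flags; this sketch substitutes busybox and a shell check, and the pod name is made up.)

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo       # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000               # the "non-root" part of (non-root,0666,tmpfs)
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory              # backs the volume with tmpfs instead of node disk
EOF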
• [SLOW TEST:6.212 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":4,"skipped":52,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 15:09:54.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Mar 8 15:09:54.489: INFO: Waiting up to 5m0s for pod "downwardapi-volume-88036d32-8223-408f-8353-ca3ef7814fc9" in namespace "projected-1674" to be "success or failure"
Mar 8 15:09:54.655: INFO: Pod "downwardapi-volume-88036d32-8223-408f-8353-ca3ef7814fc9": Phase="Pending", Reason="", readiness=false. Elapsed: 165.575346ms
Mar 8 15:09:56.680: INFO: Pod "downwardapi-volume-88036d32-8223-408f-8353-ca3ef7814fc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190298774s
Mar 8 15:09:58.684: INFO: Pod "downwardapi-volume-88036d32-8223-408f-8353-ca3ef7814fc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.194351155s
STEP: Saw pod success
Mar 8 15:09:58.684: INFO: Pod "downwardapi-volume-88036d32-8223-408f-8353-ca3ef7814fc9" satisfied condition "success or failure"
Mar 8 15:09:58.686: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-88036d32-8223-408f-8353-ca3ef7814fc9 container client-container:
STEP: delete the pod
Mar 8 15:09:58.724: INFO: Waiting for pod downwardapi-volume-88036d32-8223-408f-8353-ca3ef7814fc9 to disappear
Mar 8 15:09:58.734: INFO: Pod downwardapi-volume-88036d32-8223-408f-8353-ca3ef7814fc9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 15:09:58.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1674" for this suite.
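(A sketch of a projected downward-API volume exposing the container's memory request as a file. Names, paths, and values are illustrative.)

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-dapi-demo       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container   # required in the volume form
              resource: requests.memory
EOF
# cat should print 33554432 (32Mi in bytes, with the default divisor of 1)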
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":280,"completed":5,"skipped":56,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:09:58.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 8 15:10:04.945: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6301 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 15:10:04.945: INFO: >>> kubeConfig: /root/.kube/config I0308 15:10:04.990875 7 log.go:172] (0xc002844580) (0xc0023f6be0) Create stream I0308 15:10:04.990904 7 log.go:172] (0xc002844580) (0xc0023f6be0) Stream added, broadcasting: 1 I0308 15:10:04.992955 7 log.go:172] (0xc002844580) Reply frame received for 1 I0308 15:10:04.992993 7 log.go:172] (0xc002844580) (0xc002975680) Create stream I0308 15:10:04.993006 7 log.go:172] (0xc002844580) (0xc002975680) Stream added, broadcasting: 3 I0308 15:10:04.994059 7 log.go:172] (0xc002844580) Reply frame received for 3 I0308 15:10:04.994102 7 log.go:172] (0xc002844580) (0xc0027b2fa0) Create stream I0308 15:10:04.994149 7 log.go:172] (0xc002844580) (0xc0027b2fa0) Stream added, broadcasting: 5 I0308 15:10:04.995174 7 log.go:172] (0xc002844580) Reply frame received for 5 I0308 15:10:05.057409 7 log.go:172] (0xc002844580) Data frame received for 5 I0308 15:10:05.057452 7 log.go:172] (0xc0027b2fa0) (5) Data frame handling I0308 15:10:05.057478 7 log.go:172] (0xc002844580) Data frame received for 3 I0308 15:10:05.057485 7 log.go:172] (0xc002975680) (3) Data frame handling I0308 15:10:05.057495 7 log.go:172] (0xc002975680) (3) Data frame sent I0308 15:10:05.057785 7 log.go:172] (0xc002844580) Data frame received for 3 I0308 15:10:05.057809 7 log.go:172] (0xc002975680) (3) Data frame handling I0308 15:10:05.059229 7 log.go:172] (0xc002844580) Data frame received for 1 I0308 15:10:05.059254 7 log.go:172] (0xc0023f6be0) (1) Data frame handling I0308 15:10:05.059271 7 log.go:172] (0xc0023f6be0) (1) Data frame sent I0308 15:10:05.059291 7 log.go:172] (0xc002844580) (0xc0023f6be0) Stream removed, broadcasting: 1 I0308 15:10:05.059311 7 log.go:172] (0xc002844580) Go away received I0308 15:10:05.066198 7 log.go:172] (0xc002844580) (0xc0023f6be0) Stream removed, broadcasting: 1 I0308 15:10:05.066222 7 log.go:172] (0xc002844580) (0xc002975680) Stream removed, broadcasting: 3 I0308 15:10:05.066231 7 log.go:172] (0xc002844580) (0xc0027b2fa0) Stream removed, broadcasting: 5 Mar 8 15:10:05.066: INFO: Exec stderr: "" Mar 8 15:10:05.066: 
INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6301 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 15:10:05.066: INFO: >>> kubeConfig: /root/.kube/config I0308 15:10:05.100890 7 log.go:172] (0xc00229d1e0) (0xc0027b3220) Create stream I0308 15:10:05.100914 7 log.go:172] (0xc00229d1e0) (0xc0027b3220) Stream added, broadcasting: 1 I0308 15:10:05.103838 7 log.go:172] (0xc00229d1e0) Reply frame received for 1 I0308 15:10:05.103885 7 log.go:172] (0xc00229d1e0) (0xc0023f6c80) Create stream I0308 15:10:05.103904 7 log.go:172] (0xc00229d1e0) (0xc0023f6c80) Stream added, broadcasting: 3 I0308 15:10:05.105978 7 log.go:172] (0xc00229d1e0) Reply frame received for 3 I0308 15:10:05.106032 7 log.go:172] (0xc00229d1e0) (0xc0022e4000) Create stream I0308 15:10:05.106049 7 log.go:172] (0xc00229d1e0) (0xc0022e4000) Stream added, broadcasting: 5 I0308 15:10:05.107049 7 log.go:172] (0xc00229d1e0) Reply frame received for 5 I0308 15:10:05.161580 7 log.go:172] (0xc00229d1e0) Data frame received for 5 I0308 15:10:05.161633 7 log.go:172] (0xc0022e4000) (5) Data frame handling I0308 15:10:05.161663 7 log.go:172] (0xc00229d1e0) Data frame received for 3 I0308 15:10:05.161677 7 log.go:172] (0xc0023f6c80) (3) Data frame handling I0308 15:10:05.161698 7 log.go:172] (0xc0023f6c80) (3) Data frame sent I0308 15:10:05.161714 7 log.go:172] (0xc00229d1e0) Data frame received for 3 I0308 15:10:05.161723 7 log.go:172] (0xc0023f6c80) (3) Data frame handling I0308 15:10:05.163208 7 log.go:172] (0xc00229d1e0) Data frame received for 1 I0308 15:10:05.163239 7 log.go:172] (0xc0027b3220) (1) Data frame handling I0308 15:10:05.163261 7 log.go:172] (0xc0027b3220) (1) Data frame sent I0308 15:10:05.163279 7 log.go:172] (0xc00229d1e0) (0xc0027b3220) Stream removed, broadcasting: 1 I0308 15:10:05.163312 7 log.go:172] (0xc00229d1e0) Go away received I0308 15:10:05.163434 7 log.go:172] (0xc00229d1e0) (0xc0027b3220) Stream removed, broadcasting: 1 I0308 15:10:05.163460 7 log.go:172] (0xc00229d1e0) (0xc0023f6c80) Stream removed, broadcasting: 3 I0308 15:10:05.163474 7 log.go:172] (0xc00229d1e0) (0xc0022e4000) Stream removed, broadcasting: 5 Mar 8 15:10:05.163: INFO: Exec stderr: "" Mar 8 15:10:05.163: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6301 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 15:10:05.163: INFO: >>> kubeConfig: /root/.kube/config I0308 15:10:05.193916 7 log.go:172] (0xc00229d810) (0xc0027b3400) Create stream I0308 15:10:05.193939 7 log.go:172] (0xc00229d810) (0xc0027b3400) Stream added, broadcasting: 1 I0308 15:10:05.196000 7 log.go:172] (0xc00229d810) Reply frame received for 1 I0308 15:10:05.196026 7 log.go:172] (0xc00229d810) (0xc0027b34a0) Create stream I0308 15:10:05.196037 7 log.go:172] (0xc00229d810) (0xc0027b34a0) Stream added, broadcasting: 3 I0308 15:10:05.196817 7 log.go:172] (0xc00229d810) Reply frame received for 3 I0308 15:10:05.196842 7 log.go:172] (0xc00229d810) (0xc0023f6d20) Create stream I0308 15:10:05.196850 7 log.go:172] (0xc00229d810) (0xc0023f6d20) Stream added, broadcasting: 5 I0308 15:10:05.197665 7 log.go:172] (0xc00229d810) Reply frame received for 5 I0308 15:10:05.248297 7 log.go:172] (0xc00229d810) Data frame received for 3 I0308 15:10:05.248330 7 log.go:172] (0xc0027b34a0) (3) Data frame handling I0308 15:10:05.248339 7 log.go:172] (0xc0027b34a0) (3) Data frame sent I0308 
15:10:05.248347 7 log.go:172] (0xc00229d810) Data frame received for 3 I0308 15:10:05.248364 7 log.go:172] (0xc0027b34a0) (3) Data frame handling I0308 15:10:05.248407 7 log.go:172] (0xc00229d810) Data frame received for 5 I0308 15:10:05.248453 7 log.go:172] (0xc0023f6d20) (5) Data frame handling I0308 15:10:05.249735 7 log.go:172] (0xc00229d810) Data frame received for 1 I0308 15:10:05.249754 7 log.go:172] (0xc0027b3400) (1) Data frame handling I0308 15:10:05.249773 7 log.go:172] (0xc0027b3400) (1) Data frame sent I0308 15:10:05.249875 7 log.go:172] (0xc00229d810) (0xc0027b3400) Stream removed, broadcasting: 1 I0308 15:10:05.249907 7 log.go:172] (0xc00229d810) Go away received I0308 15:10:05.249984 7 log.go:172] (0xc00229d810) (0xc0027b3400) Stream removed, broadcasting: 1 I0308 15:10:05.249996 7 log.go:172] (0xc00229d810) (0xc0027b34a0) Stream removed, broadcasting: 3 I0308 15:10:05.250005 7 log.go:172] (0xc00229d810) (0xc0023f6d20) Stream removed, broadcasting: 5 Mar 8 15:10:05.250: INFO: Exec stderr: "" Mar 8 15:10:05.250: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6301 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 15:10:05.250: INFO: >>> kubeConfig: /root/.kube/config I0308 15:10:05.278867 7 log.go:172] (0xc0027284d0) (0xc0029759a0) Create stream I0308 15:10:05.278890 7 log.go:172] (0xc0027284d0) (0xc0029759a0) Stream added, broadcasting: 1 I0308 15:10:05.280807 7 log.go:172] (0xc0027284d0) Reply frame received for 1 I0308 15:10:05.280827 7 log.go:172] (0xc0027284d0) (0xc0027b3540) Create stream I0308 15:10:05.280833 7 log.go:172] (0xc0027284d0) (0xc0027b3540) Stream added, broadcasting: 3 I0308 15:10:05.281399 7 log.go:172] (0xc0027284d0) Reply frame received for 3 I0308 15:10:05.281418 7 log.go:172] (0xc0027284d0) (0xc0023f6dc0) Create stream I0308 15:10:05.281425 7 log.go:172] (0xc0027284d0) (0xc0023f6dc0) Stream added, broadcasting: 5 I0308 15:10:05.281955 7 log.go:172] (0xc0027284d0) Reply frame received for 5 I0308 15:10:05.336351 7 log.go:172] (0xc0027284d0) Data frame received for 3 I0308 15:10:05.336408 7 log.go:172] (0xc0027b3540) (3) Data frame handling I0308 15:10:05.336427 7 log.go:172] (0xc0027b3540) (3) Data frame sent I0308 15:10:05.336440 7 log.go:172] (0xc0027284d0) Data frame received for 3 I0308 15:10:05.336453 7 log.go:172] (0xc0027b3540) (3) Data frame handling I0308 15:10:05.336469 7 log.go:172] (0xc0027284d0) Data frame received for 5 I0308 15:10:05.336481 7 log.go:172] (0xc0023f6dc0) (5) Data frame handling I0308 15:10:05.337300 7 log.go:172] (0xc0027284d0) Data frame received for 1 I0308 15:10:05.337321 7 log.go:172] (0xc0029759a0) (1) Data frame handling I0308 15:10:05.337339 7 log.go:172] (0xc0029759a0) (1) Data frame sent I0308 15:10:05.337350 7 log.go:172] (0xc0027284d0) (0xc0029759a0) Stream removed, broadcasting: 1 I0308 15:10:05.337363 7 log.go:172] (0xc0027284d0) Go away received I0308 15:10:05.337477 7 log.go:172] (0xc0027284d0) (0xc0029759a0) Stream removed, broadcasting: 1 I0308 15:10:05.337497 7 log.go:172] (0xc0027284d0) (0xc0027b3540) Stream removed, broadcasting: 3 I0308 15:10:05.337505 7 log.go:172] (0xc0027284d0) (0xc0023f6dc0) Stream removed, broadcasting: 5 Mar 8 15:10:05.337: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 8 15:10:05.337: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6301 PodName:test-pod 
ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 15:10:05.337: INFO: >>> kubeConfig: /root/.kube/config I0308 15:10:05.362759 7 log.go:172] (0xc002728b00) (0xc002975b80) Create stream I0308 15:10:05.362778 7 log.go:172] (0xc002728b00) (0xc002975b80) Stream added, broadcasting: 1 I0308 15:10:05.369345 7 log.go:172] (0xc002728b00) Reply frame received for 1 I0308 15:10:05.369385 7 log.go:172] (0xc002728b00) (0xc0022e4000) Create stream I0308 15:10:05.369396 7 log.go:172] (0xc002728b00) (0xc0022e4000) Stream added, broadcasting: 3 I0308 15:10:05.370240 7 log.go:172] (0xc002728b00) Reply frame received for 3 I0308 15:10:05.370268 7 log.go:172] (0xc002728b00) (0xc0022e40a0) Create stream I0308 15:10:05.370277 7 log.go:172] (0xc002728b00) (0xc0022e40a0) Stream added, broadcasting: 5 I0308 15:10:05.371061 7 log.go:172] (0xc002728b00) Reply frame received for 5 I0308 15:10:05.440883 7 log.go:172] (0xc002728b00) Data frame received for 3 I0308 15:10:05.440913 7 log.go:172] (0xc0022e4000) (3) Data frame handling I0308 15:10:05.440922 7 log.go:172] (0xc0022e4000) (3) Data frame sent I0308 15:10:05.440928 7 log.go:172] (0xc002728b00) Data frame received for 3 I0308 15:10:05.440934 7 log.go:172] (0xc0022e4000) (3) Data frame handling I0308 15:10:05.440953 7 log.go:172] (0xc002728b00) Data frame received for 5 I0308 15:10:05.440961 7 log.go:172] (0xc0022e40a0) (5) Data frame handling I0308 15:10:05.442076 7 log.go:172] (0xc002728b00) Data frame received for 1 I0308 15:10:05.442100 7 log.go:172] (0xc002975b80) (1) Data frame handling I0308 15:10:05.442162 7 log.go:172] (0xc002975b80) (1) Data frame sent I0308 15:10:05.442181 7 log.go:172] (0xc002728b00) (0xc002975b80) Stream removed, broadcasting: 1 I0308 15:10:05.442204 7 log.go:172] (0xc002728b00) Go away received I0308 15:10:05.442264 7 log.go:172] (0xc002728b00) (0xc002975b80) Stream removed, broadcasting: 1 I0308 15:10:05.442278 7 log.go:172] (0xc002728b00) (0xc0022e4000) Stream removed, broadcasting: 3 I0308 15:10:05.442289 7 log.go:172] (0xc002728b00) (0xc0022e40a0) Stream removed, broadcasting: 5 Mar 8 15:10:05.442: INFO: Exec stderr: "" Mar 8 15:10:05.442: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6301 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 15:10:05.442: INFO: >>> kubeConfig: /root/.kube/config I0308 15:10:05.466155 7 log.go:172] (0xc002021130) (0xc002824140) Create stream I0308 15:10:05.466189 7 log.go:172] (0xc002021130) (0xc002824140) Stream added, broadcasting: 1 I0308 15:10:05.468171 7 log.go:172] (0xc002021130) Reply frame received for 1 I0308 15:10:05.468207 7 log.go:172] (0xc002021130) (0xc00288c140) Create stream I0308 15:10:05.468221 7 log.go:172] (0xc002021130) (0xc00288c140) Stream added, broadcasting: 3 I0308 15:10:05.469091 7 log.go:172] (0xc002021130) Reply frame received for 3 I0308 15:10:05.469122 7 log.go:172] (0xc002021130) (0xc0028241e0) Create stream I0308 15:10:05.469131 7 log.go:172] (0xc002021130) (0xc0028241e0) Stream added, broadcasting: 5 I0308 15:10:05.469783 7 log.go:172] (0xc002021130) Reply frame received for 5 I0308 15:10:05.528538 7 log.go:172] (0xc002021130) Data frame received for 5 I0308 15:10:05.528569 7 log.go:172] (0xc0028241e0) (5) Data frame handling I0308 15:10:05.528588 7 log.go:172] (0xc002021130) Data frame received for 3 I0308 15:10:05.528599 7 log.go:172] (0xc00288c140) (3) Data frame handling I0308 15:10:05.528607 7 
log.go:172] (0xc00288c140) (3) Data frame sent I0308 15:10:05.528620 7 log.go:172] (0xc002021130) Data frame received for 3 I0308 15:10:05.528634 7 log.go:172] (0xc00288c140) (3) Data frame handling I0308 15:10:05.529814 7 log.go:172] (0xc002021130) Data frame received for 1 I0308 15:10:05.529829 7 log.go:172] (0xc002824140) (1) Data frame handling I0308 15:10:05.529841 7 log.go:172] (0xc002824140) (1) Data frame sent I0308 15:10:05.529917 7 log.go:172] (0xc002021130) (0xc002824140) Stream removed, broadcasting: 1 I0308 15:10:05.529956 7 log.go:172] (0xc002021130) Go away received I0308 15:10:05.529981 7 log.go:172] (0xc002021130) (0xc002824140) Stream removed, broadcasting: 1 I0308 15:10:05.529996 7 log.go:172] (0xc002021130) (0xc00288c140) Stream removed, broadcasting: 3 I0308 15:10:05.530004 7 log.go:172] (0xc002021130) (0xc0028241e0) Stream removed, broadcasting: 5 Mar 8 15:10:05.530: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 8 15:10:05.530: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6301 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 15:10:05.530: INFO: >>> kubeConfig: /root/.kube/config I0308 15:10:05.559862 7 log.go:172] (0xc002021810) (0xc0028243c0) Create stream I0308 15:10:05.559888 7 log.go:172] (0xc002021810) (0xc0028243c0) Stream added, broadcasting: 1 I0308 15:10:05.561745 7 log.go:172] (0xc002021810) Reply frame received for 1 I0308 15:10:05.561780 7 log.go:172] (0xc002021810) (0xc002904000) Create stream I0308 15:10:05.561795 7 log.go:172] (0xc002021810) (0xc002904000) Stream added, broadcasting: 3 I0308 15:10:05.562693 7 log.go:172] (0xc002021810) Reply frame received for 3 I0308 15:10:05.562735 7 log.go:172] (0xc002021810) (0xc00288c1e0) Create stream I0308 15:10:05.562745 7 log.go:172] (0xc002021810) (0xc00288c1e0) Stream added, broadcasting: 5 I0308 15:10:05.563602 7 log.go:172] (0xc002021810) Reply frame received for 5 I0308 15:10:05.616228 7 log.go:172] (0xc002021810) Data frame received for 5 I0308 15:10:05.616284 7 log.go:172] (0xc00288c1e0) (5) Data frame handling I0308 15:10:05.616301 7 log.go:172] (0xc002021810) Data frame received for 3 I0308 15:10:05.616305 7 log.go:172] (0xc002904000) (3) Data frame handling I0308 15:10:05.616312 7 log.go:172] (0xc002904000) (3) Data frame sent I0308 15:10:05.616317 7 log.go:172] (0xc002021810) Data frame received for 3 I0308 15:10:05.616320 7 log.go:172] (0xc002904000) (3) Data frame handling I0308 15:10:05.617982 7 log.go:172] (0xc002021810) Data frame received for 1 I0308 15:10:05.617995 7 log.go:172] (0xc0028243c0) (1) Data frame handling I0308 15:10:05.618009 7 log.go:172] (0xc0028243c0) (1) Data frame sent I0308 15:10:05.618022 7 log.go:172] (0xc002021810) (0xc0028243c0) Stream removed, broadcasting: 1 I0308 15:10:05.618039 7 log.go:172] (0xc002021810) Go away received I0308 15:10:05.618149 7 log.go:172] (0xc002021810) (0xc0028243c0) Stream removed, broadcasting: 1 I0308 15:10:05.618177 7 log.go:172] (0xc002021810) (0xc002904000) Stream removed, broadcasting: 3 I0308 15:10:05.618186 7 log.go:172] (0xc002021810) (0xc00288c1e0) Stream removed, broadcasting: 5 Mar 8 15:10:05.618: INFO: Exec stderr: "" Mar 8 15:10:05.618: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6301 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} Mar 8 15:10:05.618: INFO: >>> kubeConfig: /root/.kube/config I0308 15:10:05.642661 7 log.go:172] (0xc001fd22c0) (0xc0029043c0) Create stream I0308 15:10:05.642687 7 log.go:172] (0xc001fd22c0) (0xc0029043c0) Stream added, broadcasting: 1 I0308 15:10:05.644398 7 log.go:172] (0xc001fd22c0) Reply frame received for 1 I0308 15:10:05.644424 7 log.go:172] (0xc001fd22c0) (0xc002904460) Create stream I0308 15:10:05.644432 7 log.go:172] (0xc001fd22c0) (0xc002904460) Stream added, broadcasting: 3 I0308 15:10:05.645069 7 log.go:172] (0xc001fd22c0) Reply frame received for 3 I0308 15:10:05.645102 7 log.go:172] (0xc001fd22c0) (0xc00288c280) Create stream I0308 15:10:05.645112 7 log.go:172] (0xc001fd22c0) (0xc00288c280) Stream added, broadcasting: 5 I0308 15:10:05.645731 7 log.go:172] (0xc001fd22c0) Reply frame received for 5 I0308 15:10:05.709518 7 log.go:172] (0xc001fd22c0) Data frame received for 5 I0308 15:10:05.709557 7 log.go:172] (0xc00288c280) (5) Data frame handling I0308 15:10:05.709583 7 log.go:172] (0xc001fd22c0) Data frame received for 3 I0308 15:10:05.709591 7 log.go:172] (0xc002904460) (3) Data frame handling I0308 15:10:05.709599 7 log.go:172] (0xc002904460) (3) Data frame sent I0308 15:10:05.709612 7 log.go:172] (0xc001fd22c0) Data frame received for 3 I0308 15:10:05.709616 7 log.go:172] (0xc002904460) (3) Data frame handling I0308 15:10:05.710522 7 log.go:172] (0xc001fd22c0) Data frame received for 1 I0308 15:10:05.710534 7 log.go:172] (0xc0029043c0) (1) Data frame handling I0308 15:10:05.710541 7 log.go:172] (0xc0029043c0) (1) Data frame sent I0308 15:10:05.710555 7 log.go:172] (0xc001fd22c0) (0xc0029043c0) Stream removed, broadcasting: 1 I0308 15:10:05.710581 7 log.go:172] (0xc001fd22c0) Go away received I0308 15:10:05.710664 7 log.go:172] (0xc001fd22c0) (0xc0029043c0) Stream removed, broadcasting: 1 I0308 15:10:05.710683 7 log.go:172] (0xc001fd22c0) (0xc002904460) Stream removed, broadcasting: 3 I0308 15:10:05.710694 7 log.go:172] (0xc001fd22c0) (0xc00288c280) Stream removed, broadcasting: 5 Mar 8 15:10:05.710: INFO: Exec stderr: "" Mar 8 15:10:05.710: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6301 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 15:10:05.710: INFO: >>> kubeConfig: /root/.kube/config I0308 15:10:05.732749 7 log.go:172] (0xc001fd28f0) (0xc002904640) Create stream I0308 15:10:05.732776 7 log.go:172] (0xc001fd28f0) (0xc002904640) Stream added, broadcasting: 1 I0308 15:10:05.735399 7 log.go:172] (0xc001fd28f0) Reply frame received for 1 I0308 15:10:05.735433 7 log.go:172] (0xc001fd28f0) (0xc0029046e0) Create stream I0308 15:10:05.735445 7 log.go:172] (0xc001fd28f0) (0xc0029046e0) Stream added, broadcasting: 3 I0308 15:10:05.736584 7 log.go:172] (0xc001fd28f0) Reply frame received for 3 I0308 15:10:05.736608 7 log.go:172] (0xc001fd28f0) (0xc0022e41e0) Create stream I0308 15:10:05.736621 7 log.go:172] (0xc001fd28f0) (0xc0022e41e0) Stream added, broadcasting: 5 I0308 15:10:05.737577 7 log.go:172] (0xc001fd28f0) Reply frame received for 5 I0308 15:10:05.801586 7 log.go:172] (0xc001fd28f0) Data frame received for 5 I0308 15:10:05.801609 7 log.go:172] (0xc0022e41e0) (5) Data frame handling I0308 15:10:05.801626 7 log.go:172] (0xc001fd28f0) Data frame received for 3 I0308 15:10:05.801631 7 log.go:172] (0xc0029046e0) (3) Data frame handling I0308 15:10:05.801638 7 log.go:172] (0xc0029046e0) (3) Data frame sent I0308 15:10:05.801645 7 
log.go:172] (0xc001fd28f0) Data frame received for 3 I0308 15:10:05.801650 7 log.go:172] (0xc0029046e0) (3) Data frame handling I0308 15:10:05.803097 7 log.go:172] (0xc001fd28f0) Data frame received for 1 I0308 15:10:05.803112 7 log.go:172] (0xc002904640) (1) Data frame handling I0308 15:10:05.803123 7 log.go:172] (0xc002904640) (1) Data frame sent I0308 15:10:05.803132 7 log.go:172] (0xc001fd28f0) (0xc002904640) Stream removed, broadcasting: 1 I0308 15:10:05.803213 7 log.go:172] (0xc001fd28f0) (0xc002904640) Stream removed, broadcasting: 1 I0308 15:10:05.803224 7 log.go:172] (0xc001fd28f0) (0xc0029046e0) Stream removed, broadcasting: 3 I0308 15:10:05.803293 7 log.go:172] (0xc001fd28f0) Go away received I0308 15:10:05.803427 7 log.go:172] (0xc001fd28f0) (0xc0022e41e0) Stream removed, broadcasting: 5 Mar 8 15:10:05.803: INFO: Exec stderr: "" Mar 8 15:10:05.803: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6301 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 15:10:05.803: INFO: >>> kubeConfig: /root/.kube/config I0308 15:10:05.826072 7 log.go:172] (0xc002a684d0) (0xc0022e4500) Create stream I0308 15:10:05.826093 7 log.go:172] (0xc002a684d0) (0xc0022e4500) Stream added, broadcasting: 1 I0308 15:10:05.828165 7 log.go:172] (0xc002a684d0) Reply frame received for 1 I0308 15:10:05.828203 7 log.go:172] (0xc002a684d0) (0xc002904780) Create stream I0308 15:10:05.828220 7 log.go:172] (0xc002a684d0) (0xc002904780) Stream added, broadcasting: 3 I0308 15:10:05.828936 7 log.go:172] (0xc002a684d0) Reply frame received for 3 I0308 15:10:05.828959 7 log.go:172] (0xc002a684d0) (0xc0023f60a0) Create stream I0308 15:10:05.828968 7 log.go:172] (0xc002a684d0) (0xc0023f60a0) Stream added, broadcasting: 5 I0308 15:10:05.829655 7 log.go:172] (0xc002a684d0) Reply frame received for 5 I0308 15:10:05.888735 7 log.go:172] (0xc002a684d0) Data frame received for 5 I0308 15:10:05.888757 7 log.go:172] (0xc0023f60a0) (5) Data frame handling I0308 15:10:05.888780 7 log.go:172] (0xc002a684d0) Data frame received for 3 I0308 15:10:05.888804 7 log.go:172] (0xc002904780) (3) Data frame handling I0308 15:10:05.888827 7 log.go:172] (0xc002904780) (3) Data frame sent I0308 15:10:05.888840 7 log.go:172] (0xc002a684d0) Data frame received for 3 I0308 15:10:05.888855 7 log.go:172] (0xc002904780) (3) Data frame handling I0308 15:10:05.889992 7 log.go:172] (0xc002a684d0) Data frame received for 1 I0308 15:10:05.890008 7 log.go:172] (0xc0022e4500) (1) Data frame handling I0308 15:10:05.890016 7 log.go:172] (0xc0022e4500) (1) Data frame sent I0308 15:10:05.890047 7 log.go:172] (0xc002a684d0) (0xc0022e4500) Stream removed, broadcasting: 1 I0308 15:10:05.890069 7 log.go:172] (0xc002a684d0) Go away received I0308 15:10:05.890147 7 log.go:172] (0xc002a684d0) (0xc0022e4500) Stream removed, broadcasting: 1 I0308 15:10:05.890166 7 log.go:172] (0xc002a684d0) (0xc002904780) Stream removed, broadcasting: 3 I0308 15:10:05.890179 7 log.go:172] (0xc002a684d0) (0xc0023f60a0) Stream removed, broadcasting: 5 Mar 8 15:10:05.890: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:10:05.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-6301" for this suite. 
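(The behavior verified above can be spot-checked by hand: the kubelet writes a managed /etc/hosts into pods on the pod network, but leaves it alone when hostNetwork is true or when a container mounts its own /etc/hosts. Pod name below is illustrative.)

kubectl run etc-hosts-demo --image=busybox --restart=Never -- sleep 3600
# The kubelet-managed file should carry a "# Kubernetes-managed hosts file" banner:
kubectl exec etc-hosts-demo -- cat /etc/hosts
# With hostNetwork: true (and no explicit /etc/hosts mount), the container
# instead sees the node's own /etc/hosts.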
• [SLOW TEST:7.154 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":6,"skipped":82,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 15:10:05.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Mar 8 15:10:08.572: INFO: Successfully updated pod "annotationupdatee84acee2-1726-441d-8652-2d0d1ad805bd"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 15:10:10.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6621" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":280,"completed":7,"skipped":119,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 15:10:10.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Mar 8 15:10:10.694: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 8 15:10:10.705: INFO: Waiting for terminating namespaces to be deleted...
Mar 8 15:10:10.712: INFO: Logging pods the kubelet thinks is on node latest-worker before test
Mar 8 15:10:10.717: INFO: kube-proxy-9jc24 from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded)
Mar 8 15:10:10.717: INFO: Container kube-proxy ready: true, restart count 0
Mar 8 15:10:10.718: INFO: kindnet-2j5xm from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded)
Mar 8 15:10:10.718: INFO: Container kindnet-cni ready: true, restart count 0
Mar 8 15:10:10.718: INFO: test-pod from e2e-kubelet-etc-hosts-6301 started at 2020-03-08 15:09:58 +0000 UTC (3 container statuses recorded)
Mar 8 15:10:10.718: INFO: Container busybox-1 ready: true, restart count 0
Mar 8 15:10:10.718: INFO: Container busybox-2 ready: true, restart count 0
Mar 8 15:10:10.718: INFO: Container busybox-3 ready: true, restart count 0
Mar 8 15:10:10.718: INFO: annotationupdatee84acee2-1726-441d-8652-2d0d1ad805bd from downward-api-6621 started at 2020-03-08 15:10:06 +0000 UTC (1 container statuses recorded)
Mar 8 15:10:10.718: INFO: Container client-container ready: true, restart count 0
Mar 8 15:10:10.718: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test
Mar 8 15:10:10.734: INFO: kindnet-spz5f from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded)
Mar 8 15:10:10.734: INFO: Container kindnet-cni ready: true, restart count 0
Mar 8 15:10:10.734: INFO: coredns-6955765f44-cgshp from kube-system started at 2020-03-08 14:50:16 +0000 UTC (1 container statuses recorded)
Mar 8 15:10:10.734: INFO: Container coredns ready: true, restart count 0
Mar 8 15:10:10.734: INFO: test-host-network-pod from e2e-kubelet-etc-hosts-6301 started at 2020-03-08 15:10:02 +0000 UTC (2 container statuses recorded)
Mar 8 15:10:10.734: INFO: Container busybox-1 ready: true, restart count 0
Mar 8 15:10:10.734: INFO: Container busybox-2 ready: true, restart count 0
Mar 8 15:10:10.734: INFO: kube-proxy-cx5xz from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded)
Mar 8 15:10:10.734: INFO: Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: verifying the node has the label node latest-worker
STEP: verifying the node has the label node latest-worker2
Mar 8 15:10:10.832: INFO: Pod annotationupdatee84acee2-1726-441d-8652-2d0d1ad805bd requesting resource cpu=0m on Node latest-worker
Mar 8 15:10:10.832: INFO: Pod test-host-network-pod requesting resource cpu=0m on Node latest-worker2
Mar 8 15:10:10.832: INFO: Pod test-pod requesting resource cpu=0m on Node latest-worker
Mar 8 15:10:10.832: INFO: Pod coredns-6955765f44-cgshp requesting resource cpu=100m on Node latest-worker2
Mar 8 15:10:10.832: INFO: Pod kindnet-2j5xm requesting resource cpu=100m on Node latest-worker
Mar 8 15:10:10.832: INFO: Pod kindnet-spz5f requesting resource cpu=100m on Node latest-worker2
Mar 8 15:10:10.832: INFO: Pod kube-proxy-9jc24 requesting resource cpu=0m on Node latest-worker
Mar 8 15:10:10.832: INFO: Pod kube-proxy-cx5xz requesting resource cpu=0m on Node latest-worker2
STEP: Starting Pods to consume most of the cluster CPU.
Mar 8 15:10:10.832: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker
Mar 8 15:10:10.838: INFO: Creating a pod which consumes cpu=11060m on Node latest-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-5db4717e-3cf7-48ef-92d0-a2df52f4a840.15fa5cc2d5dadf65], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6525/filler-pod-5db4717e-3cf7-48ef-92d0-a2df52f4a840 to latest-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-5db4717e-3cf7-48ef-92d0-a2df52f4a840.15fa5cc30b4309e8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-5db4717e-3cf7-48ef-92d0-a2df52f4a840.15fa5cc31cb577b2], Reason = [Created], Message = [Created container filler-pod-5db4717e-3cf7-48ef-92d0-a2df52f4a840]
STEP: Considering event: Type = [Normal], Name = [filler-pod-5db4717e-3cf7-48ef-92d0-a2df52f4a840.15fa5cc3293f84a2], Reason = [Started], Message = [Started container filler-pod-5db4717e-3cf7-48ef-92d0-a2df52f4a840]
STEP: Considering event: Type = [Normal], Name = [filler-pod-898e5cb8-72ee-4cb6-9aa2-d8965a70a3b2.15fa5cc2d57f5874], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6525/filler-pod-898e5cb8-72ee-4cb6-9aa2-d8965a70a3b2 to latest-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-898e5cb8-72ee-4cb6-9aa2-d8965a70a3b2.15fa5cc310134379], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-898e5cb8-72ee-4cb6-9aa2-d8965a70a3b2.15fa5cc32045860a], Reason = [Created], Message = [Created container filler-pod-898e5cb8-72ee-4cb6-9aa2-d8965a70a3b2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-898e5cb8-72ee-4cb6-9aa2-d8965a70a3b2.15fa5cc32de7fc2c], Reason = [Started], Message = [Started container filler-pod-898e5cb8-72ee-4cb6-9aa2-d8965a70a3b2]
STEP: Considering event: Type = [Warning], Name = [additional-pod.15fa5cc3c5639f78], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node latest-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node latest-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 15:10:15.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6525" for this suite.
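(The FailedScheduling event above is what appears when a pod's CPU request exceeds every node's remaining allocatable CPU. A hand-rolled equivalent, with an illustrative name and request value:)

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod-demo       # hypothetical name
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "1000"               # far more CPU than any node can offer
EOF
# Events should show: FailedScheduling ... 0/N nodes are available ... Insufficient cpu
kubectl describe pod additional-pod-demo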
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
• [SLOW TEST:5.397 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":280,"completed":8,"skipped":150,"failed":0}
S
------------------------------
[sig-storage] Projected combined
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 15:10:16.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-projected-all-test-volume-f0e13ca6-0ba6-495e-8202-a2696f75cdc7
STEP: Creating secret with name secret-projected-all-test-volume-6df080c5-f340-4735-b0a4-73f38f8b4217
STEP: Creating a pod to test Check all projections for projected volume plugin
Mar 8 15:10:16.119: INFO: Waiting up to 5m0s for pod "projected-volume-497cdb97-0070-4182-b0c5-8c9d9615189a" in namespace "projected-9170" to be "success or failure"
Mar 8 15:10:16.157: INFO: Pod "projected-volume-497cdb97-0070-4182-b0c5-8c9d9615189a": Phase="Pending", Reason="", readiness=false. Elapsed: 38.640682ms
Mar 8 15:10:18.161: INFO: Pod "projected-volume-497cdb97-0070-4182-b0c5-8c9d9615189a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042173809s
Mar 8 15:10:20.164: INFO: Pod "projected-volume-497cdb97-0070-4182-b0c5-8c9d9615189a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045151503s
Mar 8 15:10:22.166: INFO: Pod "projected-volume-497cdb97-0070-4182-b0c5-8c9d9615189a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.04764233s
STEP: Saw pod success
Mar 8 15:10:22.166: INFO: Pod "projected-volume-497cdb97-0070-4182-b0c5-8c9d9615189a" satisfied condition "success or failure"
Mar 8 15:10:22.167: INFO: Trying to get logs from node latest-worker2 pod projected-volume-497cdb97-0070-4182-b0c5-8c9d9615189a container projected-all-volume-test:
STEP: delete the pod
Mar 8 15:10:22.187: INFO: Waiting for pod projected-volume-497cdb97-0070-4182-b0c5-8c9d9615189a to disappear
Mar 8 15:10:22.202: INFO: Pod projected-volume-497cdb97-0070-4182-b0c5-8c9d9615189a no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 15:10:22.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9170" for this suite.
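(A sketch of a projected volume combining the three source types this test exercises: configMap, secret, and downwardAPI in one mount. Resource names are hypothetical; create the ConfigMap and Secret first.)

kubectl create configmap demo-config --from-literal=config-key=config-value
kubectl create secret generic demo-secret --from-literal=secret-key=secret-value
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-all-demo
  labels:
    app: projected-all-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox
    command: ["sh", "-c", "ls -R /projected && cat /projected/podinfo/labels"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: demo-config
      - secret:
          name: demo-secret
      - downwardAPI:
          items:
          - path: podinfo/labels
            fieldRef:
              fieldPath: metadata.labels
EOF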
• [SLOW TEST:6.204 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":280,"completed":9,"skipped":151,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 15:10:22.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating replication controller my-hostname-basic-75def4ed-3dd8-4a6d-af5c-349250f6a577
Mar 8 15:10:22.295: INFO: Pod name my-hostname-basic-75def4ed-3dd8-4a6d-af5c-349250f6a577: Found 0 pods out of 1
Mar 8 15:10:27.297: INFO: Pod name my-hostname-basic-75def4ed-3dd8-4a6d-af5c-349250f6a577: Found 1 pods out of 1
Mar 8 15:10:27.297: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-75def4ed-3dd8-4a6d-af5c-349250f6a577" are running
Mar 8 15:10:27.303: INFO: Pod "my-hostname-basic-75def4ed-3dd8-4a6d-af5c-349250f6a577-c5bgk" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 15:10:22 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 15:10:23 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 15:10:23 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 15:10:22 +0000 UTC Reason: Message:}])
Mar 8 15:10:27.303: INFO: Trying to dial the pod
Mar 8 15:10:32.313: INFO: Controller my-hostname-basic-75def4ed-3dd8-4a6d-af5c-349250f6a577: Got expected result from replica 1 [my-hostname-basic-75def4ed-3dd8-4a6d-af5c-349250f6a577-c5bgk]: "my-hostname-basic-75def4ed-3dd8-4a6d-af5c-349250f6a577-c5bgk", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 15:10:32.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7714" for this suite.
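(A minimal ReplicationController in the same spirit. The suite uses its own serve-hostname test image; agnhost's serve-hostname subcommand, which answers with the pod's hostname on port 9376, is a reasonable stand-in, and the image tag here is illustrative.)

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic-demo    # hypothetical name
spec:
  replicas: 1
  selector:
    name: my-hostname-basic-demo
  template:
    metadata:
      labels:
        name: my-hostname-basic-demo
    spec:
      containers:
      - name: my-hostname-basic-demo
        image: registry.k8s.io/e2e-test-images/agnhost:2.39   # illustrative image/tag
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376
EOF
# Each replica should answer requests with its own pod name:
kubectl get pods -l name=my-hostname-basic-demo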
• [SLOW TEST:10.114 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":280,"completed":10,"skipped":162,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:10:32.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 8 15:10:32.405: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9318 /api/v1/namespaces/watch-9318/configmaps/e2e-watch-test-configmap-a 3942748e-c024-4e0b-9e4c-7daaee31251a 6601 0 2020-03-08 15:10:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 15:10:32.405: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9318 /api/v1/namespaces/watch-9318/configmaps/e2e-watch-test-configmap-a 3942748e-c024-4e0b-9e4c-7daaee31251a 6601 0 2020-03-08 15:10:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 8 15:10:42.412: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9318 /api/v1/namespaces/watch-9318/configmaps/e2e-watch-test-configmap-a 3942748e-c024-4e0b-9e4c-7daaee31251a 6646 0 2020-03-08 15:10:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 15:10:42.413: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9318 /api/v1/namespaces/watch-9318/configmaps/e2e-watch-test-configmap-a 3942748e-c024-4e0b-9e4c-7daaee31251a 6646 0 2020-03-08 15:10:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 8 15:10:52.421: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9318 /api/v1/namespaces/watch-9318/configmaps/e2e-watch-test-configmap-a 3942748e-c024-4e0b-9e4c-7daaee31251a 6683 0 2020-03-08 
15:10:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 15:10:52.421: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9318 /api/v1/namespaces/watch-9318/configmaps/e2e-watch-test-configmap-a 3942748e-c024-4e0b-9e4c-7daaee31251a 6683 0 2020-03-08 15:10:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 8 15:11:02.427: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9318 /api/v1/namespaces/watch-9318/configmaps/e2e-watch-test-configmap-a 3942748e-c024-4e0b-9e4c-7daaee31251a 6713 0 2020-03-08 15:10:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 15:11:02.427: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9318 /api/v1/namespaces/watch-9318/configmaps/e2e-watch-test-configmap-a 3942748e-c024-4e0b-9e4c-7daaee31251a 6713 0 2020-03-08 15:10:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 8 15:11:12.433: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9318 /api/v1/namespaces/watch-9318/configmaps/e2e-watch-test-configmap-b 64c46018-556a-4e2d-9424-f80aaff0ea4f 6743 0 2020-03-08 15:11:12 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 15:11:12.433: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9318 /api/v1/namespaces/watch-9318/configmaps/e2e-watch-test-configmap-b 64c46018-556a-4e2d-9424-f80aaff0ea4f 6743 0 2020-03-08 15:11:12 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 8 15:11:22.437: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9318 /api/v1/namespaces/watch-9318/configmaps/e2e-watch-test-configmap-b 64c46018-556a-4e2d-9424-f80aaff0ea4f 6771 0 2020-03-08 15:11:12 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 15:11:22.437: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9318 /api/v1/namespaces/watch-9318/configmaps/e2e-watch-test-configmap-b 64c46018-556a-4e2d-9424-f80aaff0ea4f 6771 0 2020-03-08 15:11:12 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:11:32.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9318" for this suite. 
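Each event above appears twice because two of the three watchers (the label-A watcher and the A-or-B watcher) match the configmap, while the label-B watcher stays silent until configmap B is created. A compact sketch of opening one such filtered watch with client-go (assumes a current client-go where Watch takes a context; error handling trimmed for brevity):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Watch only configmaps carrying label A, like the test's first watcher.
	w, err := client.CoreV1().ConfigMaps("watch-9318").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Each ADDED/MODIFIED/DELETED notification arrives as a watch.Event.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}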
• [SLOW TEST:60.125 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":280,"completed":11,"skipped":183,"failed":0} [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:11:32.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 8 15:11:35.540: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:11:35.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6418" for this suite. 
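With TerminationMessagePolicy FallbackToLogsOnError, the kubelet copies the tail of the container's log into the termination message when the container fails and the termination-message file is empty, which is why the log output "DONE" surfaces as the message above. A sketch of such a container spec:

package example

import corev1 "k8s.io/api/core/v1"

// fallbackToLogsContainer writes "DONE" only to its log and exits non-zero;
// FallbackToLogsOnError makes the kubelet use the log tail as the
// termination message because the message file is never written.
func fallbackToLogsContainer() corev1.Container {
	return corev1.Container{
		Name:                     "termination-message-container",
		Image:                    "busybox",
		Command:                  []string{"sh", "-c", "echo -n DONE; exit 1"},
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
}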
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":12,"skipped":183,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:11:35.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 8 15:11:35.638: INFO: Waiting up to 5m0s for pod "downwardapi-volume-54e3cb81-2917-47a0-9cf9-7cc03a81e203" in namespace "projected-9912" to be "success or failure" Mar 8 15:11:35.654: INFO: Pod "downwardapi-volume-54e3cb81-2917-47a0-9cf9-7cc03a81e203": Phase="Pending", Reason="", readiness=false. Elapsed: 15.782191ms Mar 8 15:11:37.657: INFO: Pod "downwardapi-volume-54e3cb81-2917-47a0-9cf9-7cc03a81e203": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018387695s STEP: Saw pod success Mar 8 15:11:37.657: INFO: Pod "downwardapi-volume-54e3cb81-2917-47a0-9cf9-7cc03a81e203" satisfied condition "success or failure" Mar 8 15:11:37.659: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-54e3cb81-2917-47a0-9cf9-7cc03a81e203 container client-container: STEP: delete the pod Mar 8 15:11:37.725: INFO: Waiting for pod downwardapi-volume-54e3cb81-2917-47a0-9cf9-7cc03a81e203 to disappear Mar 8 15:11:37.730: INFO: Pod downwardapi-volume-54e3cb81-2917-47a0-9cf9-7cc03a81e203 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:11:37.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9912" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":13,"skipped":198,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:11:37.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test override command Mar 8 15:11:37.780: INFO: Waiting up to 5m0s for pod "client-containers-749c7170-4179-4dd5-a1c2-b7e89e2dbaab" in namespace "containers-4831" to be "success or failure" Mar 8 15:11:37.785: INFO: Pod "client-containers-749c7170-4179-4dd5-a1c2-b7e89e2dbaab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.85747ms Mar 8 15:11:39.799: INFO: Pod "client-containers-749c7170-4179-4dd5-a1c2-b7e89e2dbaab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019697143s STEP: Saw pod success Mar 8 15:11:39.799: INFO: Pod "client-containers-749c7170-4179-4dd5-a1c2-b7e89e2dbaab" satisfied condition "success or failure" Mar 8 15:11:39.803: INFO: Trying to get logs from node latest-worker pod client-containers-749c7170-4179-4dd5-a1c2-b7e89e2dbaab container test-container: STEP: delete the pod Mar 8 15:11:39.821: INFO: Waiting for pod client-containers-749c7170-4179-4dd5-a1c2-b7e89e2dbaab to disappear Mar 8 15:11:39.826: INFO: Pod client-containers-749c7170-4179-4dd5-a1c2-b7e89e2dbaab no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:11:39.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4831" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":280,"completed":14,"skipped":225,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:11:39.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 15:11:39.882: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-7946 I0308 15:11:39.897971 7 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7946, replica count: 1 I0308 15:11:40.948401 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0308 15:11:41.948638 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 8 15:11:42.066: INFO: Created: latency-svc-h4xp8 Mar 8 15:11:42.079: INFO: Got endpoints: latency-svc-h4xp8 [31.097127ms] Mar 8 15:11:42.129: INFO: Created: latency-svc-wwdlx Mar 8 15:11:42.155: INFO: Created: latency-svc-dgrkz Mar 8 15:11:42.156: INFO: Got endpoints: latency-svc-wwdlx [75.976259ms] Mar 8 15:11:42.157: INFO: Got endpoints: latency-svc-dgrkz [77.790178ms] Mar 8 15:11:42.179: INFO: Created: latency-svc-2stqn Mar 8 15:11:42.204: INFO: Created: latency-svc-mmmwr Mar 8 15:11:42.204: INFO: Got endpoints: latency-svc-2stqn [123.97364ms] Mar 8 15:11:42.248: INFO: Got endpoints: latency-svc-mmmwr [168.42705ms] Mar 8 15:11:42.276: INFO: Created: latency-svc-r6ztm Mar 8 15:11:42.306: INFO: Got endpoints: latency-svc-r6ztm [226.099565ms] Mar 8 15:11:42.348: INFO: Created: latency-svc-trhth Mar 8 15:11:42.399: INFO: Got endpoints: latency-svc-trhth [319.711831ms] Mar 8 15:11:42.457: INFO: Created: latency-svc-8lfcv Mar 8 15:11:42.548: INFO: Got endpoints: latency-svc-8lfcv [468.175493ms] Mar 8 15:11:42.568: INFO: Created: latency-svc-mmztz Mar 8 15:11:42.581: INFO: Got endpoints: latency-svc-mmztz [500.98463ms] Mar 8 15:11:42.607: INFO: Created: latency-svc-8qh89 Mar 8 15:11:42.617: INFO: Got endpoints: latency-svc-8qh89 [536.670646ms] Mar 8 15:11:42.636: INFO: Created: latency-svc-v6r47 Mar 8 15:11:42.640: INFO: Got endpoints: latency-svc-v6r47 [560.659175ms] Mar 8 15:11:42.675: INFO: Created: latency-svc-zqz75 Mar 8 15:11:42.715: INFO: Got endpoints: latency-svc-zqz75 [635.373639ms] Mar 8 15:11:42.716: INFO: Created: latency-svc-hqm9m Mar 8 15:11:42.745: INFO: Got endpoints: latency-svc-hqm9m [665.268127ms] Mar 8 15:11:42.802: INFO: Created: latency-svc-fqhbr Mar 8 15:11:42.841: INFO: Got endpoints: latency-svc-fqhbr [761.476892ms] Mar 8 15:11:42.842: INFO: Created: latency-svc-w44wz Mar 8 15:11:42.878: INFO: Got endpoints: latency-svc-w44wz [798.659682ms] Mar 8 15:11:43.700: INFO: Created: latency-svc-blstg Mar 8 15:11:43.795: 
INFO: Got endpoints: latency-svc-blstg [1.714849771s] Mar 8 15:11:43.923: INFO: Created: latency-svc-zm8ql Mar 8 15:11:43.941: INFO: Got endpoints: latency-svc-zm8ql [1.785698717s] Mar 8 15:11:43.959: INFO: Created: latency-svc-kdfhr Mar 8 15:11:43.978: INFO: Got endpoints: latency-svc-kdfhr [1.820427664s] Mar 8 15:11:44.045: INFO: Created: latency-svc-mpqzf Mar 8 15:11:44.068: INFO: Got endpoints: latency-svc-mpqzf [1.86387235s] Mar 8 15:11:44.069: INFO: Created: latency-svc-zkwjn Mar 8 15:11:44.073: INFO: Got endpoints: latency-svc-zkwjn [1.825005997s] Mar 8 15:11:44.121: INFO: Created: latency-svc-pfl55 Mar 8 15:11:44.139: INFO: Got endpoints: latency-svc-pfl55 [1.832937011s] Mar 8 15:11:44.195: INFO: Created: latency-svc-t8qh4 Mar 8 15:11:44.247: INFO: Got endpoints: latency-svc-t8qh4 [1.847970976s] Mar 8 15:11:44.249: INFO: Created: latency-svc-flvh4 Mar 8 15:11:44.258: INFO: Got endpoints: latency-svc-flvh4 [1.710050257s] Mar 8 15:11:44.279: INFO: Created: latency-svc-gbtd6 Mar 8 15:11:44.282: INFO: Got endpoints: latency-svc-gbtd6 [1.700703627s] Mar 8 15:11:44.321: INFO: Created: latency-svc-rzjbg Mar 8 15:11:44.356: INFO: Got endpoints: latency-svc-rzjbg [1.73903098s] Mar 8 15:11:44.356: INFO: Created: latency-svc-ngcw2 Mar 8 15:11:44.359: INFO: Got endpoints: latency-svc-ngcw2 [1.718926851s] Mar 8 15:11:44.380: INFO: Created: latency-svc-vb759 Mar 8 15:11:44.384: INFO: Got endpoints: latency-svc-vb759 [1.668384547s] Mar 8 15:11:44.415: INFO: Created: latency-svc-46kxs Mar 8 15:11:44.441: INFO: Got endpoints: latency-svc-46kxs [1.695438964s] Mar 8 15:11:44.457: INFO: Created: latency-svc-hlwxg Mar 8 15:11:44.467: INFO: Got endpoints: latency-svc-hlwxg [1.626039558s] Mar 8 15:11:44.494: INFO: Created: latency-svc-rzlbh Mar 8 15:11:44.498: INFO: Got endpoints: latency-svc-rzlbh [1.619384806s] Mar 8 15:11:44.518: INFO: Created: latency-svc-s7fmn Mar 8 15:11:44.528: INFO: Got endpoints: latency-svc-s7fmn [733.681442ms] Mar 8 15:11:44.578: INFO: Created: latency-svc-zbn6j Mar 8 15:11:44.594: INFO: Got endpoints: latency-svc-zbn6j [652.406815ms] Mar 8 15:11:44.655: INFO: Created: latency-svc-l2pr9 Mar 8 15:11:44.660: INFO: Got endpoints: latency-svc-l2pr9 [682.040156ms] Mar 8 15:11:44.712: INFO: Created: latency-svc-gqxfv Mar 8 15:11:44.720: INFO: Got endpoints: latency-svc-gqxfv [651.98283ms] Mar 8 15:11:44.747: INFO: Created: latency-svc-j2npl Mar 8 15:11:44.761: INFO: Got endpoints: latency-svc-j2npl [687.822402ms] Mar 8 15:11:44.806: INFO: Created: latency-svc-c7lg8 Mar 8 15:11:44.809: INFO: Got endpoints: latency-svc-c7lg8 [670.080521ms] Mar 8 15:11:44.842: INFO: Created: latency-svc-spl5z Mar 8 15:11:44.851: INFO: Got endpoints: latency-svc-spl5z [603.406309ms] Mar 8 15:11:44.903: INFO: Created: latency-svc-knhtc Mar 8 15:11:44.911: INFO: Got endpoints: latency-svc-knhtc [652.573282ms] Mar 8 15:11:44.987: INFO: Created: latency-svc-vqqkr Mar 8 15:11:45.011: INFO: Got endpoints: latency-svc-vqqkr [729.443629ms] Mar 8 15:11:45.111: INFO: Created: latency-svc-jz9t6 Mar 8 15:11:45.155: INFO: Got endpoints: latency-svc-jz9t6 [799.722872ms] Mar 8 15:11:45.156: INFO: Created: latency-svc-7prqz Mar 8 15:11:45.181: INFO: Got endpoints: latency-svc-7prqz [821.27826ms] Mar 8 15:11:45.347: INFO: Created: latency-svc-g7884 Mar 8 15:11:45.361: INFO: Got endpoints: latency-svc-g7884 [977.031761ms] Mar 8 15:11:45.403: INFO: Created: latency-svc-2gr4p Mar 8 15:11:45.408: INFO: Got endpoints: latency-svc-2gr4p [967.775972ms] Mar 8 15:11:45.513: INFO: Created: latency-svc-x6cvv Mar 8 15:11:45.553: 
INFO: Created: latency-svc-gb2zh Mar 8 15:11:45.553: INFO: Got endpoints: latency-svc-x6cvv [1.085363798s] Mar 8 15:11:45.565: INFO: Got endpoints: latency-svc-gb2zh [1.066998495s] Mar 8 15:11:45.669: INFO: Created: latency-svc-t686t Mar 8 15:11:45.673: INFO: Got endpoints: latency-svc-t686t [1.144802605s] Mar 8 15:11:45.734: INFO: Created: latency-svc-v8xfb Mar 8 15:11:45.751: INFO: Got endpoints: latency-svc-v8xfb [1.157424578s] Mar 8 15:11:45.812: INFO: Created: latency-svc-njvmg Mar 8 15:11:45.828: INFO: Got endpoints: latency-svc-njvmg [1.168182152s] Mar 8 15:11:45.874: INFO: Created: latency-svc-pbn8p Mar 8 15:11:45.893: INFO: Got endpoints: latency-svc-pbn8p [1.173493574s] Mar 8 15:11:45.955: INFO: Created: latency-svc-4nktj Mar 8 15:11:45.960: INFO: Got endpoints: latency-svc-4nktj [1.198732256s] Mar 8 15:11:46.011: INFO: Created: latency-svc-46lhl Mar 8 15:11:46.019: INFO: Got endpoints: latency-svc-46lhl [1.210568185s] Mar 8 15:11:46.117: INFO: Created: latency-svc-2rzhz Mar 8 15:11:46.121: INFO: Got endpoints: latency-svc-2rzhz [1.269856386s] Mar 8 15:11:46.156: INFO: Created: latency-svc-ncf5z Mar 8 15:11:46.194: INFO: Got endpoints: latency-svc-ncf5z [1.282981445s] Mar 8 15:11:46.279: INFO: Created: latency-svc-rtkms Mar 8 15:11:46.336: INFO: Created: latency-svc-5b5hl Mar 8 15:11:46.337: INFO: Got endpoints: latency-svc-rtkms [1.325481291s] Mar 8 15:11:46.367: INFO: Got endpoints: latency-svc-5b5hl [1.211261143s] Mar 8 15:11:46.428: INFO: Created: latency-svc-xj865 Mar 8 15:11:46.483: INFO: Created: latency-svc-pc624 Mar 8 15:11:46.483: INFO: Got endpoints: latency-svc-xj865 [1.302244178s] Mar 8 15:11:46.511: INFO: Got endpoints: latency-svc-pc624 [1.150305108s] Mar 8 15:11:46.578: INFO: Created: latency-svc-8tbkk Mar 8 15:11:46.601: INFO: Got endpoints: latency-svc-8tbkk [1.192332274s] Mar 8 15:11:46.602: INFO: Created: latency-svc-nzv4j Mar 8 15:11:46.619: INFO: Got endpoints: latency-svc-nzv4j [1.066708905s] Mar 8 15:11:46.728: INFO: Created: latency-svc-krwld Mar 8 15:11:46.807: INFO: Got endpoints: latency-svc-krwld [1.241723945s] Mar 8 15:11:46.807: INFO: Created: latency-svc-hfqqr Mar 8 15:11:46.816: INFO: Got endpoints: latency-svc-hfqqr [1.143070089s] Mar 8 15:11:46.898: INFO: Created: latency-svc-s4dq6 Mar 8 15:11:46.906: INFO: Got endpoints: latency-svc-s4dq6 [1.154768505s] Mar 8 15:11:46.957: INFO: Created: latency-svc-wpwk7 Mar 8 15:11:46.989: INFO: Got endpoints: latency-svc-wpwk7 [1.160980343s] Mar 8 15:11:47.095: INFO: Created: latency-svc-lxrpf Mar 8 15:11:47.189: INFO: Got endpoints: latency-svc-lxrpf [1.295936792s] Mar 8 15:11:47.194: INFO: Created: latency-svc-sk6cn Mar 8 15:11:47.363: INFO: Got endpoints: latency-svc-sk6cn [1.402363166s] Mar 8 15:11:47.365: INFO: Created: latency-svc-ls5d2 Mar 8 15:11:47.385: INFO: Got endpoints: latency-svc-ls5d2 [1.365812936s] Mar 8 15:11:47.600: INFO: Created: latency-svc-7xlk5 Mar 8 15:11:47.605: INFO: Got endpoints: latency-svc-7xlk5 [1.484425887s] Mar 8 15:11:47.818: INFO: Created: latency-svc-2zgfm Mar 8 15:11:47.868: INFO: Got endpoints: latency-svc-2zgfm [1.67369201s] Mar 8 15:11:47.868: INFO: Created: latency-svc-bc7wz Mar 8 15:11:47.906: INFO: Got endpoints: latency-svc-bc7wz [1.56934031s] Mar 8 15:11:47.962: INFO: Created: latency-svc-8hbpd Mar 8 15:11:47.975: INFO: Got endpoints: latency-svc-8hbpd [1.607853631s] Mar 8 15:11:48.055: INFO: Created: latency-svc-w9skb Mar 8 15:11:48.093: INFO: Got endpoints: latency-svc-w9skb [1.609923652s] Mar 8 15:11:48.121: INFO: Created: latency-svc-wpps4 Mar 8 15:11:48.146: 
INFO: Got endpoints: latency-svc-wpps4 [1.634778608s] Mar 8 15:11:48.236: INFO: Created: latency-svc-m2m47 Mar 8 15:11:48.278: INFO: Got endpoints: latency-svc-m2m47 [1.676677019s] Mar 8 15:11:48.278: INFO: Created: latency-svc-8zjd7 Mar 8 15:11:48.543: INFO: Got endpoints: latency-svc-8zjd7 [1.92321734s] Mar 8 15:11:48.550: INFO: Created: latency-svc-lwct7 Mar 8 15:11:48.589: INFO: Got endpoints: latency-svc-lwct7 [1.782707222s] Mar 8 15:11:48.639: INFO: Created: latency-svc-zlgf6 Mar 8 15:11:48.704: INFO: Got endpoints: latency-svc-zlgf6 [1.887314267s] Mar 8 15:11:48.893: INFO: Created: latency-svc-j6drx Mar 8 15:11:48.923: INFO: Got endpoints: latency-svc-j6drx [2.017121184s] Mar 8 15:11:49.021: INFO: Created: latency-svc-pxlgw Mar 8 15:11:49.063: INFO: Created: latency-svc-plml8 Mar 8 15:11:49.063: INFO: Got endpoints: latency-svc-pxlgw [2.073997234s] Mar 8 15:11:49.074: INFO: Got endpoints: latency-svc-plml8 [1.884284398s] Mar 8 15:11:49.171: INFO: Created: latency-svc-hd5km Mar 8 15:11:49.207: INFO: Got endpoints: latency-svc-hd5km [1.844642716s] Mar 8 15:11:49.208: INFO: Created: latency-svc-jp5kb Mar 8 15:11:49.261: INFO: Got endpoints: latency-svc-jp5kb [1.875713594s] Mar 8 15:11:49.358: INFO: Created: latency-svc-cf2ll Mar 8 15:11:49.394: INFO: Got endpoints: latency-svc-cf2ll [1.788218016s] Mar 8 15:11:49.394: INFO: Created: latency-svc-rpjnf Mar 8 15:11:49.409: INFO: Got endpoints: latency-svc-rpjnf [1.541122866s] Mar 8 15:11:49.494: INFO: Created: latency-svc-5sswb Mar 8 15:11:49.521: INFO: Created: latency-svc-ktqzb Mar 8 15:11:49.521: INFO: Got endpoints: latency-svc-5sswb [1.614843506s] Mar 8 15:11:49.529: INFO: Got endpoints: latency-svc-ktqzb [1.554794117s] Mar 8 15:11:49.550: INFO: Created: latency-svc-p6b8b Mar 8 15:11:49.559: INFO: Got endpoints: latency-svc-p6b8b [1.466151946s] Mar 8 15:11:49.580: INFO: Created: latency-svc-frlph Mar 8 15:11:49.591: INFO: Got endpoints: latency-svc-frlph [1.445023454s] Mar 8 15:11:49.631: INFO: Created: latency-svc-rdnx8 Mar 8 15:11:49.653: INFO: Created: latency-svc-wtqcw Mar 8 15:11:49.653: INFO: Got endpoints: latency-svc-rdnx8 [1.375802563s] Mar 8 15:11:49.661: INFO: Got endpoints: latency-svc-wtqcw [1.118199362s] Mar 8 15:11:49.695: INFO: Created: latency-svc-7m2n6 Mar 8 15:11:49.702: INFO: Got endpoints: latency-svc-7m2n6 [1.112615257s] Mar 8 15:11:49.781: INFO: Created: latency-svc-8nfdw Mar 8 15:11:49.816: INFO: Created: latency-svc-t4lgr Mar 8 15:11:49.816: INFO: Got endpoints: latency-svc-8nfdw [1.11264451s] Mar 8 15:11:49.828: INFO: Got endpoints: latency-svc-t4lgr [905.211746ms] Mar 8 15:11:49.866: INFO: Created: latency-svc-pnm2p Mar 8 15:11:49.871: INFO: Got endpoints: latency-svc-pnm2p [807.557992ms] Mar 8 15:11:49.925: INFO: Created: latency-svc-wd658 Mar 8 15:11:49.944: INFO: Got endpoints: latency-svc-wd658 [870.522031ms] Mar 8 15:11:49.945: INFO: Created: latency-svc-dqblp Mar 8 15:11:49.962: INFO: Got endpoints: latency-svc-dqblp [754.883983ms] Mar 8 15:11:49.983: INFO: Created: latency-svc-ntl4c Mar 8 15:11:49.990: INFO: Got endpoints: latency-svc-ntl4c [729.192432ms] Mar 8 15:11:50.010: INFO: Created: latency-svc-mv7n2 Mar 8 15:11:50.099: INFO: Got endpoints: latency-svc-mv7n2 [705.464318ms] Mar 8 15:11:50.119: INFO: Created: latency-svc-bxv6x Mar 8 15:11:50.150: INFO: Created: latency-svc-v599w Mar 8 15:11:50.150: INFO: Got endpoints: latency-svc-bxv6x [741.103912ms] Mar 8 15:11:50.158: INFO: Got endpoints: latency-svc-v599w [637.360418ms] Mar 8 15:11:50.179: INFO: Created: latency-svc-h8hcm Mar 8 15:11:50.188: 
INFO: Got endpoints: latency-svc-h8hcm [658.823087ms] Mar 8 15:11:50.267: INFO: Created: latency-svc-f9964 Mar 8 15:11:50.289: INFO: Created: latency-svc-qzn8z Mar 8 15:11:50.290: INFO: Got endpoints: latency-svc-f9964 [730.858152ms] Mar 8 15:11:50.296: INFO: Got endpoints: latency-svc-qzn8z [705.256054ms] Mar 8 15:11:50.326: INFO: Created: latency-svc-w48c4 Mar 8 15:11:50.332: INFO: Got endpoints: latency-svc-w48c4 [678.419004ms] Mar 8 15:11:50.357: INFO: Created: latency-svc-rtgt7 Mar 8 15:11:50.361: INFO: Got endpoints: latency-svc-rtgt7 [700.175748ms] Mar 8 15:11:50.416: INFO: Created: latency-svc-795ch Mar 8 15:11:50.435: INFO: Got endpoints: latency-svc-795ch [732.90525ms] Mar 8 15:11:50.435: INFO: Created: latency-svc-mtlkz Mar 8 15:11:50.439: INFO: Got endpoints: latency-svc-mtlkz [622.882678ms] Mar 8 15:11:50.459: INFO: Created: latency-svc-8mtnc Mar 8 15:11:50.463: INFO: Got endpoints: latency-svc-8mtnc [634.957577ms] Mar 8 15:11:50.489: INFO: Created: latency-svc-q6xv8 Mar 8 15:11:50.505: INFO: Got endpoints: latency-svc-q6xv8 [634.459861ms] Mar 8 15:11:50.590: INFO: Created: latency-svc-jgt6s Mar 8 15:11:50.629: INFO: Got endpoints: latency-svc-jgt6s [684.372568ms] Mar 8 15:11:50.629: INFO: Created: latency-svc-rwdh2 Mar 8 15:11:50.648: INFO: Got endpoints: latency-svc-rwdh2 [686.11115ms] Mar 8 15:11:50.758: INFO: Created: latency-svc-fzwt6 Mar 8 15:11:50.792: INFO: Got endpoints: latency-svc-fzwt6 [801.356454ms] Mar 8 15:11:50.792: INFO: Created: latency-svc-qtdm6 Mar 8 15:11:50.799: INFO: Got endpoints: latency-svc-qtdm6 [700.094162ms] Mar 8 15:11:50.817: INFO: Created: latency-svc-mp9l9 Mar 8 15:11:50.823: INFO: Got endpoints: latency-svc-mp9l9 [673.343791ms] Mar 8 15:11:50.854: INFO: Created: latency-svc-m7xrc Mar 8 15:11:50.901: INFO: Got endpoints: latency-svc-m7xrc [742.857477ms] Mar 8 15:11:50.902: INFO: Created: latency-svc-fcg9r Mar 8 15:11:50.907: INFO: Got endpoints: latency-svc-fcg9r [718.621141ms] Mar 8 15:11:50.927: INFO: Created: latency-svc-6mj4t Mar 8 15:11:50.937: INFO: Got endpoints: latency-svc-6mj4t [647.446902ms] Mar 8 15:11:50.958: INFO: Created: latency-svc-srjl7 Mar 8 15:11:50.967: INFO: Got endpoints: latency-svc-srjl7 [670.923173ms] Mar 8 15:11:50.989: INFO: Created: latency-svc-wvpst Mar 8 15:11:50.996: INFO: Got endpoints: latency-svc-wvpst [664.617689ms] Mar 8 15:11:51.039: INFO: Created: latency-svc-28clt Mar 8 15:11:51.044: INFO: Got endpoints: latency-svc-28clt [683.295611ms] Mar 8 15:11:51.063: INFO: Created: latency-svc-rhckj Mar 8 15:11:51.068: INFO: Got endpoints: latency-svc-rhckj [633.239166ms] Mar 8 15:11:51.088: INFO: Created: latency-svc-qlzqx Mar 8 15:11:51.113: INFO: Got endpoints: latency-svc-qlzqx [673.794465ms] Mar 8 15:11:51.225: INFO: Created: latency-svc-t9tll Mar 8 15:11:51.269: INFO: Got endpoints: latency-svc-t9tll [805.068917ms] Mar 8 15:11:51.269: INFO: Created: latency-svc-7xp62 Mar 8 15:11:51.272: INFO: Got endpoints: latency-svc-7xp62 [766.790984ms] Mar 8 15:11:51.312: INFO: Created: latency-svc-8jq5v Mar 8 15:11:51.374: INFO: Got endpoints: latency-svc-8jq5v [745.377669ms] Mar 8 15:11:51.402: INFO: Created: latency-svc-t8trm Mar 8 15:11:51.410: INFO: Got endpoints: latency-svc-t8trm [762.035152ms] Mar 8 15:11:51.450: INFO: Created: latency-svc-4d8wm Mar 8 15:11:51.459: INFO: Got endpoints: latency-svc-4d8wm [666.938725ms] Mar 8 15:11:51.536: INFO: Created: latency-svc-lt7dv Mar 8 15:11:51.585: INFO: Created: latency-svc-vhh5h Mar 8 15:11:51.585: INFO: Got endpoints: latency-svc-lt7dv [786.046142ms] Mar 8 15:11:51.603: 
INFO: Got endpoints: latency-svc-vhh5h [779.455854ms] Mar 8 15:11:51.634: INFO: Created: latency-svc-f56m8 Mar 8 15:11:51.686: INFO: Got endpoints: latency-svc-f56m8 [785.037396ms] Mar 8 15:11:51.687: INFO: Created: latency-svc-27k6h Mar 8 15:11:51.692: INFO: Got endpoints: latency-svc-27k6h [785.026805ms] Mar 8 15:11:51.714: INFO: Created: latency-svc-gjn7g Mar 8 15:11:51.722: INFO: Got endpoints: latency-svc-gjn7g [784.013651ms] Mar 8 15:11:51.756: INFO: Created: latency-svc-sj5vk Mar 8 15:11:51.765: INFO: Got endpoints: latency-svc-sj5vk [797.507437ms] Mar 8 15:11:51.829: INFO: Created: latency-svc-q7lxv Mar 8 15:11:51.859: INFO: Got endpoints: latency-svc-q7lxv [862.573577ms] Mar 8 15:11:51.862: INFO: Created: latency-svc-sd95g Mar 8 15:11:51.878: INFO: Created: latency-svc-pn277 Mar 8 15:11:51.879: INFO: Got endpoints: latency-svc-sd95g [834.127991ms] Mar 8 15:11:51.884: INFO: Got endpoints: latency-svc-pn277 [816.159671ms] Mar 8 15:11:51.914: INFO: Created: latency-svc-6mrbr Mar 8 15:11:52.039: INFO: Got endpoints: latency-svc-6mrbr [925.676713ms] Mar 8 15:11:52.039: INFO: Created: latency-svc-99c7h Mar 8 15:11:52.076: INFO: Got endpoints: latency-svc-99c7h [807.222057ms] Mar 8 15:11:52.076: INFO: Created: latency-svc-bwzlx Mar 8 15:11:52.094: INFO: Got endpoints: latency-svc-bwzlx [821.532119ms] Mar 8 15:11:52.129: INFO: Created: latency-svc-z8kfm Mar 8 15:11:52.219: INFO: Got endpoints: latency-svc-z8kfm [180.261758ms] Mar 8 15:11:52.221: INFO: Created: latency-svc-4wdwz Mar 8 15:11:52.225: INFO: Got endpoints: latency-svc-4wdwz [850.958434ms] Mar 8 15:11:52.256: INFO: Created: latency-svc-cv829 Mar 8 15:11:52.280: INFO: Got endpoints: latency-svc-cv829 [869.654579ms] Mar 8 15:11:52.316: INFO: Created: latency-svc-26kp5 Mar 8 15:11:52.362: INFO: Got endpoints: latency-svc-26kp5 [903.578997ms] Mar 8 15:11:52.370: INFO: Created: latency-svc-57g55 Mar 8 15:11:52.381: INFO: Got endpoints: latency-svc-57g55 [795.873052ms] Mar 8 15:11:52.412: INFO: Created: latency-svc-cb8cq Mar 8 15:11:52.423: INFO: Got endpoints: latency-svc-cb8cq [820.588213ms] Mar 8 15:11:52.602: INFO: Created: latency-svc-d67sf Mar 8 15:11:52.647: INFO: Got endpoints: latency-svc-d67sf [960.042242ms] Mar 8 15:11:52.648: INFO: Created: latency-svc-mgbsc Mar 8 15:11:52.681: INFO: Got endpoints: latency-svc-mgbsc [988.581367ms] Mar 8 15:11:52.758: INFO: Created: latency-svc-qcf7m Mar 8 15:11:52.764: INFO: Got endpoints: latency-svc-qcf7m [1.042690294s] Mar 8 15:11:52.815: INFO: Created: latency-svc-vr86d Mar 8 15:11:52.830: INFO: Got endpoints: latency-svc-vr86d [1.065442367s] Mar 8 15:11:52.919: INFO: Created: latency-svc-dfxl7 Mar 8 15:11:52.971: INFO: Got endpoints: latency-svc-dfxl7 [1.111633318s] Mar 8 15:11:52.971: INFO: Created: latency-svc-cnvv9 Mar 8 15:11:52.980: INFO: Got endpoints: latency-svc-cnvv9 [1.100935765s] Mar 8 15:11:53.105: INFO: Created: latency-svc-t6mzv Mar 8 15:11:53.175: INFO: Created: latency-svc-h9dlk Mar 8 15:11:53.175: INFO: Got endpoints: latency-svc-t6mzv [1.290317217s] Mar 8 15:11:53.184: INFO: Got endpoints: latency-svc-h9dlk [1.107967784s] Mar 8 15:11:53.291: INFO: Created: latency-svc-xqkdw Mar 8 15:11:53.332: INFO: Got endpoints: latency-svc-xqkdw [1.237767365s] Mar 8 15:11:53.333: INFO: Created: latency-svc-gdnbk Mar 8 15:11:53.346: INFO: Got endpoints: latency-svc-gdnbk [1.127229133s] Mar 8 15:11:53.380: INFO: Created: latency-svc-m2x7f Mar 8 15:11:53.440: INFO: Got endpoints: latency-svc-m2x7f [1.214983634s] Mar 8 15:11:53.442: INFO: Created: latency-svc-w9jgm Mar 8 
15:11:53.454: INFO: Got endpoints: latency-svc-w9jgm [1.17380828s] Mar 8 15:11:53.495: INFO: Created: latency-svc-5kfcs Mar 8 15:11:53.520: INFO: Got endpoints: latency-svc-5kfcs [1.157316071s] Mar 8 15:11:53.590: INFO: Created: latency-svc-554js Mar 8 15:11:53.620: INFO: Got endpoints: latency-svc-554js [1.238769504s] Mar 8 15:11:53.651: INFO: Created: latency-svc-zmlxv Mar 8 15:11:53.675: INFO: Got endpoints: latency-svc-zmlxv [1.251171552s] Mar 8 15:11:53.794: INFO: Created: latency-svc-4gd9f Mar 8 15:11:53.832: INFO: Created: latency-svc-cd4lp Mar 8 15:11:53.832: INFO: Got endpoints: latency-svc-4gd9f [1.185591676s] Mar 8 15:11:53.873: INFO: Got endpoints: latency-svc-cd4lp [1.192107356s] Mar 8 15:11:53.973: INFO: Created: latency-svc-lxh7c Mar 8 15:11:54.029: INFO: Got endpoints: latency-svc-lxh7c [1.265019328s] Mar 8 15:11:54.030: INFO: Created: latency-svc-klg29 Mar 8 15:11:54.058: INFO: Got endpoints: latency-svc-klg29 [1.22795774s] Mar 8 15:11:54.171: INFO: Created: latency-svc-cvvg8 Mar 8 15:11:54.211: INFO: Created: latency-svc-gkz8b Mar 8 15:11:54.211: INFO: Got endpoints: latency-svc-cvvg8 [1.240055404s] Mar 8 15:11:54.248: INFO: Got endpoints: latency-svc-gkz8b [1.268383484s] Mar 8 15:11:54.249: INFO: Created: latency-svc-5bjc2 Mar 8 15:11:54.262: INFO: Got endpoints: latency-svc-5bjc2 [1.087315668s] Mar 8 15:11:54.345: INFO: Created: latency-svc-646lh Mar 8 15:11:54.379: INFO: Got endpoints: latency-svc-646lh [1.195146966s] Mar 8 15:11:54.379: INFO: Created: latency-svc-82pvq Mar 8 15:11:54.388: INFO: Got endpoints: latency-svc-82pvq [1.056890665s] Mar 8 15:11:54.415: INFO: Created: latency-svc-xgtvg Mar 8 15:11:54.424: INFO: Got endpoints: latency-svc-xgtvg [1.077516324s] Mar 8 15:11:54.506: INFO: Created: latency-svc-qkbcf Mar 8 15:11:54.529: INFO: Got endpoints: latency-svc-qkbcf [1.088944553s] Mar 8 15:11:54.530: INFO: Created: latency-svc-9c684 Mar 8 15:11:54.538: INFO: Got endpoints: latency-svc-9c684 [1.083642714s] Mar 8 15:11:54.566: INFO: Created: latency-svc-xqnzn Mar 8 15:11:54.573: INFO: Got endpoints: latency-svc-xqnzn [1.053321799s] Mar 8 15:11:54.595: INFO: Created: latency-svc-s7w97 Mar 8 15:11:54.603: INFO: Got endpoints: latency-svc-s7w97 [983.033806ms] Mar 8 15:11:54.656: INFO: Created: latency-svc-44925 Mar 8 15:11:54.663: INFO: Got endpoints: latency-svc-44925 [988.532228ms] Mar 8 15:11:54.685: INFO: Created: latency-svc-h9k89 Mar 8 15:11:54.699: INFO: Got endpoints: latency-svc-h9k89 [866.764478ms] Mar 8 15:11:54.721: INFO: Created: latency-svc-dq2fc Mar 8 15:11:54.729: INFO: Got endpoints: latency-svc-dq2fc [856.076928ms] Mar 8 15:11:54.794: INFO: Created: latency-svc-88h2n Mar 8 15:11:54.811: INFO: Created: latency-svc-p2pzw Mar 8 15:11:54.812: INFO: Got endpoints: latency-svc-88h2n [782.728939ms] Mar 8 15:11:54.830: INFO: Got endpoints: latency-svc-p2pzw [771.406128ms] Mar 8 15:11:54.830: INFO: Created: latency-svc-w78h9 Mar 8 15:11:54.837: INFO: Got endpoints: latency-svc-w78h9 [625.95616ms] Mar 8 15:11:54.857: INFO: Created: latency-svc-q2glf Mar 8 15:11:54.861: INFO: Got endpoints: latency-svc-q2glf [613.085625ms] Mar 8 15:11:54.875: INFO: Created: latency-svc-5mlxm Mar 8 15:11:54.882: INFO: Got endpoints: latency-svc-5mlxm [619.685966ms] Mar 8 15:11:54.893: INFO: Created: latency-svc-mmmbv Mar 8 15:11:54.931: INFO: Got endpoints: latency-svc-mmmbv [552.153162ms] Mar 8 15:11:54.941: INFO: Created: latency-svc-wz5nn Mar 8 15:11:54.951: INFO: Got endpoints: latency-svc-wz5nn [562.655753ms] Mar 8 15:11:54.977: INFO: Created: latency-svc-jscld Mar 8 
15:11:54.987: INFO: Got endpoints: latency-svc-jscld [563.455536ms] Mar 8 15:11:55.009: INFO: Created: latency-svc-b57rh Mar 8 15:11:55.063: INFO: Created: latency-svc-7v2tf Mar 8 15:11:55.063: INFO: Got endpoints: latency-svc-b57rh [534.375218ms] Mar 8 15:11:55.086: INFO: Got endpoints: latency-svc-7v2tf [548.099957ms] Mar 8 15:11:55.086: INFO: Created: latency-svc-749fk Mar 8 15:11:55.094: INFO: Got endpoints: latency-svc-749fk [521.258269ms] Mar 8 15:11:55.109: INFO: Created: latency-svc-ctv4k Mar 8 15:11:55.112: INFO: Got endpoints: latency-svc-ctv4k [508.979403ms] Mar 8 15:11:55.134: INFO: Created: latency-svc-bpmkv Mar 8 15:11:55.158: INFO: Created: latency-svc-rdpkk Mar 8 15:11:55.158: INFO: Got endpoints: latency-svc-bpmkv [494.476639ms] Mar 8 15:11:55.160: INFO: Got endpoints: latency-svc-rdpkk [461.32765ms] Mar 8 15:11:55.202: INFO: Created: latency-svc-6j2fs Mar 8 15:11:55.214: INFO: Got endpoints: latency-svc-6j2fs [485.454746ms] Mar 8 15:11:55.260: INFO: Created: latency-svc-l4rmk Mar 8 15:11:55.269: INFO: Got endpoints: latency-svc-l4rmk [456.453044ms] Mar 8 15:11:55.350: INFO: Created: latency-svc-4bjq2 Mar 8 15:11:55.410: INFO: Got endpoints: latency-svc-4bjq2 [579.964672ms] Mar 8 15:11:55.410: INFO: Created: latency-svc-2jwrc Mar 8 15:11:55.542: INFO: Got endpoints: latency-svc-2jwrc [705.133136ms] Mar 8 15:11:55.556: INFO: Created: latency-svc-gr49k Mar 8 15:11:55.569: INFO: Got endpoints: latency-svc-gr49k [707.671316ms] Mar 8 15:11:55.704: INFO: Created: latency-svc-bhprz Mar 8 15:11:55.730: INFO: Got endpoints: latency-svc-bhprz [847.284312ms] Mar 8 15:11:55.730: INFO: Created: latency-svc-s5f4g Mar 8 15:11:55.754: INFO: Got endpoints: latency-svc-s5f4g [822.758884ms] Mar 8 15:11:55.777: INFO: Created: latency-svc-t2s6r Mar 8 15:11:55.784: INFO: Got endpoints: latency-svc-t2s6r [832.96897ms] Mar 8 15:11:55.801: INFO: Created: latency-svc-qftph Mar 8 15:11:55.835: INFO: Got endpoints: latency-svc-qftph [847.963117ms] Mar 8 15:11:55.836: INFO: Created: latency-svc-blk42 Mar 8 15:11:55.843: INFO: Got endpoints: latency-svc-blk42 [779.499947ms] Mar 8 15:11:55.867: INFO: Created: latency-svc-62psf Mar 8 15:11:55.873: INFO: Got endpoints: latency-svc-62psf [787.39085ms] Mar 8 15:11:55.873: INFO: Latencies: [75.976259ms 77.790178ms 123.97364ms 168.42705ms 180.261758ms 226.099565ms 319.711831ms 456.453044ms 461.32765ms 468.175493ms 485.454746ms 494.476639ms 500.98463ms 508.979403ms 521.258269ms 534.375218ms 536.670646ms 548.099957ms 552.153162ms 560.659175ms 562.655753ms 563.455536ms 579.964672ms 603.406309ms 613.085625ms 619.685966ms 622.882678ms 625.95616ms 633.239166ms 634.459861ms 634.957577ms 635.373639ms 637.360418ms 647.446902ms 651.98283ms 652.406815ms 652.573282ms 658.823087ms 664.617689ms 665.268127ms 666.938725ms 670.080521ms 670.923173ms 673.343791ms 673.794465ms 678.419004ms 682.040156ms 683.295611ms 684.372568ms 686.11115ms 687.822402ms 700.094162ms 700.175748ms 705.133136ms 705.256054ms 705.464318ms 707.671316ms 718.621141ms 729.192432ms 729.443629ms 730.858152ms 732.90525ms 733.681442ms 741.103912ms 742.857477ms 745.377669ms 754.883983ms 761.476892ms 762.035152ms 766.790984ms 771.406128ms 779.455854ms 779.499947ms 782.728939ms 784.013651ms 785.026805ms 785.037396ms 786.046142ms 787.39085ms 795.873052ms 797.507437ms 798.659682ms 799.722872ms 801.356454ms 805.068917ms 807.222057ms 807.557992ms 816.159671ms 820.588213ms 821.27826ms 821.532119ms 822.758884ms 832.96897ms 834.127991ms 847.284312ms 847.963117ms 850.958434ms 856.076928ms 862.573577ms 866.764478ms 
869.654579ms 870.522031ms 903.578997ms 905.211746ms 925.676713ms 960.042242ms 967.775972ms 977.031761ms 983.033806ms 988.532228ms 988.581367ms 1.042690294s 1.053321799s 1.056890665s 1.065442367s 1.066708905s 1.066998495s 1.077516324s 1.083642714s 1.085363798s 1.087315668s 1.088944553s 1.100935765s 1.107967784s 1.111633318s 1.112615257s 1.11264451s 1.118199362s 1.127229133s 1.143070089s 1.144802605s 1.150305108s 1.154768505s 1.157316071s 1.157424578s 1.160980343s 1.168182152s 1.173493574s 1.17380828s 1.185591676s 1.192107356s 1.192332274s 1.195146966s 1.198732256s 1.210568185s 1.211261143s 1.214983634s 1.22795774s 1.237767365s 1.238769504s 1.240055404s 1.241723945s 1.251171552s 1.265019328s 1.268383484s 1.269856386s 1.282981445s 1.290317217s 1.295936792s 1.302244178s 1.325481291s 1.365812936s 1.375802563s 1.402363166s 1.445023454s 1.466151946s 1.484425887s 1.541122866s 1.554794117s 1.56934031s 1.607853631s 1.609923652s 1.614843506s 1.619384806s 1.626039558s 1.634778608s 1.668384547s 1.67369201s 1.676677019s 1.695438964s 1.700703627s 1.710050257s 1.714849771s 1.718926851s 1.73903098s 1.782707222s 1.785698717s 1.788218016s 1.820427664s 1.825005997s 1.832937011s 1.844642716s 1.847970976s 1.86387235s 1.875713594s 1.884284398s 1.887314267s 1.92321734s 2.017121184s 2.073997234s] Mar 8 15:11:55.873: INFO: 50 %ile: 869.654579ms Mar 8 15:11:55.873: INFO: 90 %ile: 1.700703627s Mar 8 15:11:55.873: INFO: 99 %ile: 2.017121184s Mar 8 15:11:55.873: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:11:55.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-7946" for this suite. • [SLOW TEST:16.061 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":280,"completed":15,"skipped":236,"failed":0} SSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:11:55.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-648, will wait for the garbage collector to delete the pods Mar 8 15:12:00.046: INFO: Deleting Job.batch foo took: 7.52749ms Mar 8 15:12:00.346: INFO: Terminating Job.batch foo pods took: 300.213946ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:12:42.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-648" for this suite. 
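The Job test deletes the Job object and then, as the log notes, waits for the garbage collector to remove the Job's pods. One way to request that cascade from client-go (a sketch with illustrative names; the e2e framework's own deletion helper differs in detail):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Background propagation returns once the Job is deleted and lets the
	// garbage collector terminate the Job's pods asynchronously.
	policy := metav1.DeletePropagationBackground
	err = client.BatchV1().Jobs("job-648").Delete(context.TODO(), "foo",
		metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
}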
• [SLOW TEST:46.664 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":280,"completed":16,"skipped":239,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:12:42.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-6c1eaa02-9c3b-47ef-8dbb-51aaf5b3acca STEP: Creating a pod to test consume secrets Mar 8 15:12:42.697: INFO: Waiting up to 5m0s for pod "pod-secrets-ffc9ee15-4e7c-4d44-8f61-3ee557bfc414" in namespace "secrets-628" to be "success or failure" Mar 8 15:12:42.723: INFO: Pod "pod-secrets-ffc9ee15-4e7c-4d44-8f61-3ee557bfc414": Phase="Pending", Reason="", readiness=false. Elapsed: 25.937364ms Mar 8 15:12:44.727: INFO: Pod "pod-secrets-ffc9ee15-4e7c-4d44-8f61-3ee557bfc414": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030534396s Mar 8 15:12:46.731: INFO: Pod "pod-secrets-ffc9ee15-4e7c-4d44-8f61-3ee557bfc414": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034652437s STEP: Saw pod success Mar 8 15:12:46.732: INFO: Pod "pod-secrets-ffc9ee15-4e7c-4d44-8f61-3ee557bfc414" satisfied condition "success or failure" Mar 8 15:12:46.734: INFO: Trying to get logs from node latest-worker pod pod-secrets-ffc9ee15-4e7c-4d44-8f61-3ee557bfc414 container secret-volume-test: STEP: delete the pod Mar 8 15:12:46.771: INFO: Waiting for pod pod-secrets-ffc9ee15-4e7c-4d44-8f61-3ee557bfc414 to disappear Mar 8 15:12:46.812: INFO: Pod pod-secrets-ffc9ee15-4e7c-4d44-8f61-3ee557bfc414 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:12:46.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-628" for this suite. STEP: Destroying namespace "secret-namespace-913" for this suite. 
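A secret volume always resolves its name within the pod's own namespace, so the same-named secret created in "secret-namespace-913" cannot shadow or leak into the pod running in "secrets-628", which is the isolation this test checks. A sketch of the consuming pod:

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretVolumePod mounts a secret by name; resolution happens in the pod's
// namespace regardless of same-named secrets elsewhere.
func secretVolumePod(secretName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/secret-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: secretName},
				},
			}},
		},
	}
}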
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":280,"completed":17,"skipped":283,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:12:46.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 8 15:12:46.956: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1264381a-13e0-4308-ae7e-9d0e763adefc" in namespace "projected-733" to be "success or failure" Mar 8 15:12:46.973: INFO: Pod "downwardapi-volume-1264381a-13e0-4308-ae7e-9d0e763adefc": Phase="Pending", Reason="", readiness=false. Elapsed: 17.566302ms Mar 8 15:12:48.977: INFO: Pod "downwardapi-volume-1264381a-13e0-4308-ae7e-9d0e763adefc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021296101s STEP: Saw pod success Mar 8 15:12:48.977: INFO: Pod "downwardapi-volume-1264381a-13e0-4308-ae7e-9d0e763adefc" satisfied condition "success or failure" Mar 8 15:12:48.979: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-1264381a-13e0-4308-ae7e-9d0e763adefc container client-container: STEP: delete the pod Mar 8 15:12:49.006: INFO: Waiting for pod downwardapi-volume-1264381a-13e0-4308-ae7e-9d0e763adefc to disappear Mar 8 15:12:49.029: INFO: Pod downwardapi-volume-1264381a-13e0-4308-ae7e-9d0e763adefc no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:12:49.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-733" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":280,"completed":18,"skipped":290,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:12:49.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 15:12:49.503: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 15:12:51.512: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277169, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277169, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277169, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277169, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 15:12:54.543: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:13:04.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"webhook-2365" for this suite. STEP: Destroying namespace "webhook-2365-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:15.854 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":280,"completed":19,"skipped":299,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:13:04.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 8 15:13:05.755: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 8 15:13:07.765: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277185, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277185, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277185, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277185, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 15:13:10.792: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 15:13:10.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource 
STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:13:12.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7057" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.397 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":280,"completed":20,"skipped":314,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:13:12.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-upd-aeb66d7e-5661-421c-9d68-c737c1aaf53f STEP: Creating the pod STEP: Updating configmap configmap-test-upd-aeb66d7e-5661-421c-9d68-c737c1aaf53f STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:13:16.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6571" for this suite. 
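The update path exercised above — editing a ConfigMap in place and waiting for the change to surface in an already-mounted volume — can be reproduced by hand outside the suite. A minimal sketch, assuming a reachable cluster and kubectl on the PATH; the demo-cm and cm-watch names are illustrative (the suite generates its own), and propagation is bounded by the kubelet's sync period (typically under a minute), not instantaneous:

kubectl create configmap demo-cm --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-watch
spec:
  containers:
  - name: watch
    image: busybox
    # Print the mounted key every few seconds so the update shows up in the logs.
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: demo-cm
EOF
# Replace the ConfigMap data in place and watch the mounted file follow
# (--dry-run=client is the current spelling; older kubectl releases used bare --dry-run):
kubectl create configmap demo-cm --from-literal=data-1=value-2 --dry-run=client -o yaml | kubectl replace -f -
kubectl logs -f cm-watch

Note the volume is mounted as a directory, not via subPath; subPath mounts never receive ConfigMap updates, which is why the test (and this sketch) asserts on a directory-mounted key.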
•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":21,"skipped":329,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:13:16.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 8 15:13:16.525: INFO: Waiting up to 5m0s for pod "pod-1e00f5c2-e01f-4c21-9500-cba729691958" in namespace "emptydir-4464" to be "success or failure" Mar 8 15:13:16.528: INFO: Pod "pod-1e00f5c2-e01f-4c21-9500-cba729691958": Phase="Pending", Reason="", readiness=false. Elapsed: 2.300462ms Mar 8 15:13:18.531: INFO: Pod "pod-1e00f5c2-e01f-4c21-9500-cba729691958": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005358644s STEP: Saw pod success Mar 8 15:13:18.531: INFO: Pod "pod-1e00f5c2-e01f-4c21-9500-cba729691958" satisfied condition "success or failure" Mar 8 15:13:18.532: INFO: Trying to get logs from node latest-worker pod pod-1e00f5c2-e01f-4c21-9500-cba729691958 container test-container: STEP: delete the pod Mar 8 15:13:18.558: INFO: Waiting for pod pod-1e00f5c2-e01f-4c21-9500-cba729691958 to disappear Mar 8 15:13:18.563: INFO: Pod pod-1e00f5c2-e01f-4c21-9500-cba729691958 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:13:18.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4464" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":22,"skipped":329,"failed":0} ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:13:18.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:14:18.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-155" for this suite. • [SLOW TEST:60.090 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":280,"completed":23,"skipped":329,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:14:18.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the initial replication controller Mar 8 15:14:18.707: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5075' Mar 8 15:14:19.077: INFO: stderr: "" Mar 8 15:14:19.077: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 8 15:14:19.077: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5075' Mar 8 15:14:19.229: INFO: stderr: "" Mar 8 15:14:19.229: INFO: stdout: "update-demo-nautilus-jgsj5 update-demo-nautilus-lnc5q " Mar 8 15:14:19.229: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jgsj5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5075' Mar 8 15:14:19.313: INFO: stderr: "" Mar 8 15:14:19.313: INFO: stdout: "" Mar 8 15:14:19.313: INFO: update-demo-nautilus-jgsj5 is created but not running Mar 8 15:14:24.313: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5075' Mar 8 15:14:24.390: INFO: stderr: "" Mar 8 15:14:24.390: INFO: stdout: "update-demo-nautilus-jgsj5 update-demo-nautilus-lnc5q " Mar 8 15:14:24.390: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jgsj5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5075' Mar 8 15:14:24.459: INFO: stderr: "" Mar 8 15:14:24.459: INFO: stdout: "true" Mar 8 15:14:24.459: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jgsj5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5075' Mar 8 15:14:24.523: INFO: stderr: "" Mar 8 15:14:24.523: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 15:14:24.523: INFO: validating pod update-demo-nautilus-jgsj5 Mar 8 15:14:24.525: INFO: got data: { "image": "nautilus.jpg" } Mar 8 15:14:24.525: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 15:14:24.525: INFO: update-demo-nautilus-jgsj5 is verified up and running Mar 8 15:14:24.525: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lnc5q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5075' Mar 8 15:14:24.587: INFO: stderr: "" Mar 8 15:14:24.587: INFO: stdout: "true" Mar 8 15:14:24.587: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lnc5q -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5075' Mar 8 15:14:24.646: INFO: stderr: "" Mar 8 15:14:24.646: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 15:14:24.646: INFO: validating pod update-demo-nautilus-lnc5q Mar 8 15:14:24.649: INFO: got data: { "image": "nautilus.jpg" } Mar 8 15:14:24.649: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 15:14:24.649: INFO: update-demo-nautilus-lnc5q is verified up and running STEP: rolling-update to new replication controller Mar 8 15:14:24.651: INFO: scanned /root for discovery docs: Mar 8 15:14:24.651: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-5075' Mar 8 15:14:47.332: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 8 15:14:47.332: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 8 15:14:47.332: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5075' Mar 8 15:14:47.442: INFO: stderr: "" Mar 8 15:14:47.442: INFO: stdout: "update-demo-kitten-cfg9c update-demo-kitten-wbm7l " Mar 8 15:14:47.442: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-kitten-cfg9c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5075' Mar 8 15:14:47.550: INFO: stderr: "" Mar 8 15:14:47.550: INFO: stdout: "true" Mar 8 15:14:47.551: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-kitten-cfg9c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5075' Mar 8 15:14:47.653: INFO: stderr: "" Mar 8 15:14:47.653: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 8 15:14:47.653: INFO: validating pod update-demo-kitten-cfg9c Mar 8 15:14:47.658: INFO: got data: { "image": "kitten.jpg" } Mar 8 15:14:47.658: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 8 15:14:47.658: INFO: update-demo-kitten-cfg9c is verified up and running Mar 8 15:14:47.658: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-kitten-wbm7l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5075' Mar 8 15:14:47.738: INFO: stderr: "" Mar 8 15:14:47.738: INFO: stdout: "true" Mar 8 15:14:47.738: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-kitten-wbm7l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5075' Mar 8 15:14:47.855: INFO: stderr: "" Mar 8 15:14:47.855: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 8 15:14:47.855: INFO: validating pod update-demo-kitten-wbm7l Mar 8 15:14:47.858: INFO: got data: { "image": "kitten.jpg" } Mar 8 15:14:47.858: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 8 15:14:47.858: INFO: update-demo-kitten-wbm7l is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:14:47.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5075" for this suite. • [SLOW TEST:29.205 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":280,"completed":24,"skipped":342,"failed":0} SSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:14:47.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 15:14:47.996: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-bd0dda61-ffc1-4551-807b-4dc02658f98e" in namespace "security-context-test-7338" to be "success or failure" Mar 8 15:14:48.007: INFO: Pod "alpine-nnp-false-bd0dda61-ffc1-4551-807b-4dc02658f98e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.686547ms Mar 8 15:14:50.011: INFO: Pod "alpine-nnp-false-bd0dda61-ffc1-4551-807b-4dc02658f98e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014895647s Mar 8 15:14:52.019: INFO: Pod "alpine-nnp-false-bd0dda61-ffc1-4551-807b-4dc02658f98e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022034145s Mar 8 15:14:52.019: INFO: Pod "alpine-nnp-false-bd0dda61-ffc1-4551-807b-4dc02658f98e" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:14:52.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7338" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":25,"skipped":349,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:14:52.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-2638 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-2638 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2638 Mar 8 15:14:52.171: INFO: Found 0 stateful pods, waiting for 1 Mar 8 15:15:02.175: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 8 15:15:02.177: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2638 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 15:15:02.397: INFO: stderr: "I0308 15:15:02.315113 499 log.go:172] (0xc0008f8000) (0xc000908000) Create stream\nI0308 15:15:02.315153 499 log.go:172] (0xc0008f8000) (0xc000908000) Stream added, broadcasting: 1\nI0308 15:15:02.316997 499 log.go:172] (0xc0008f8000) Reply frame received for 1\nI0308 15:15:02.317022 499 log.go:172] (0xc0008f8000) (0xc00040b220) Create stream\nI0308 15:15:02.317032 499 log.go:172] (0xc0008f8000) (0xc00040b220) Stream added, broadcasting: 3\nI0308 15:15:02.317653 499 log.go:172] (0xc0008f8000) Reply frame received for 3\nI0308 15:15:02.317680 499 log.go:172] (0xc0008f8000) (0xc0009080a0) Create stream\nI0308 15:15:02.317688 499 log.go:172] (0xc0008f8000) (0xc0009080a0) Stream added, broadcasting: 5\nI0308 15:15:02.318378 499 log.go:172] (0xc0008f8000) Reply frame received for 5\nI0308 15:15:02.372056 499 log.go:172] (0xc0008f8000) Data frame received for 5\nI0308 
15:15:02.372079 499 log.go:172] (0xc0009080a0) (5) Data frame handling\nI0308 15:15:02.372090 499 log.go:172] (0xc0009080a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 15:15:02.393286 499 log.go:172] (0xc0008f8000) Data frame received for 3\nI0308 15:15:02.393306 499 log.go:172] (0xc00040b220) (3) Data frame handling\nI0308 15:15:02.393359 499 log.go:172] (0xc00040b220) (3) Data frame sent\nI0308 15:15:02.393392 499 log.go:172] (0xc0008f8000) Data frame received for 3\nI0308 15:15:02.393399 499 log.go:172] (0xc00040b220) (3) Data frame handling\nI0308 15:15:02.393439 499 log.go:172] (0xc0008f8000) Data frame received for 5\nI0308 15:15:02.393455 499 log.go:172] (0xc0009080a0) (5) Data frame handling\nI0308 15:15:02.394716 499 log.go:172] (0xc0008f8000) Data frame received for 1\nI0308 15:15:02.394741 499 log.go:172] (0xc000908000) (1) Data frame handling\nI0308 15:15:02.394752 499 log.go:172] (0xc000908000) (1) Data frame sent\nI0308 15:15:02.394764 499 log.go:172] (0xc0008f8000) (0xc000908000) Stream removed, broadcasting: 1\nI0308 15:15:02.394785 499 log.go:172] (0xc0008f8000) Go away received\nI0308 15:15:02.395077 499 log.go:172] (0xc0008f8000) (0xc000908000) Stream removed, broadcasting: 1\nI0308 15:15:02.395103 499 log.go:172] (0xc0008f8000) (0xc00040b220) Stream removed, broadcasting: 3\nI0308 15:15:02.395110 499 log.go:172] (0xc0008f8000) (0xc0009080a0) Stream removed, broadcasting: 5\n" Mar 8 15:15:02.397: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 15:15:02.397: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 15:15:02.400: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 8 15:15:12.405: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 8 15:15:12.405: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 15:15:12.417: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999507s Mar 8 15:15:13.421: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.995851541s Mar 8 15:15:14.425: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.991745886s Mar 8 15:15:15.430: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.987772849s Mar 8 15:15:16.434: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.983216042s Mar 8 15:15:17.438: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.978979438s Mar 8 15:15:18.443: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.974806825s Mar 8 15:15:19.447: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.970110489s Mar 8 15:15:20.452: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.965468956s Mar 8 15:15:21.456: INFO: Verifying statefulset ss doesn't scale past 1 for another 961.282164ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2638 Mar 8 15:15:22.460: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2638 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 15:15:22.628: INFO: stderr: "I0308 15:15:22.562947 520 log.go:172] (0xc000a86e70) (0xc0007339a0) Create stream\nI0308 15:15:22.563004 520 log.go:172] (0xc000a86e70) (0xc0007339a0) Stream added, 
broadcasting: 1\nI0308 15:15:22.564468 520 log.go:172] (0xc000a86e70) Reply frame received for 1\nI0308 15:15:22.564500 520 log.go:172] (0xc000a86e70) (0xc0006eac80) Create stream\nI0308 15:15:22.564512 520 log.go:172] (0xc000a86e70) (0xc0006eac80) Stream added, broadcasting: 3\nI0308 15:15:22.565222 520 log.go:172] (0xc000a86e70) Reply frame received for 3\nI0308 15:15:22.565247 520 log.go:172] (0xc000a86e70) (0xc0006ead20) Create stream\nI0308 15:15:22.565256 520 log.go:172] (0xc000a86e70) (0xc0006ead20) Stream added, broadcasting: 5\nI0308 15:15:22.565872 520 log.go:172] (0xc000a86e70) Reply frame received for 5\nI0308 15:15:22.624327 520 log.go:172] (0xc000a86e70) Data frame received for 5\nI0308 15:15:22.624349 520 log.go:172] (0xc0006ead20) (5) Data frame handling\nI0308 15:15:22.624362 520 log.go:172] (0xc0006ead20) (5) Data frame sent\nI0308 15:15:22.624367 520 log.go:172] (0xc000a86e70) Data frame received for 5\nI0308 15:15:22.624373 520 log.go:172] (0xc0006ead20) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 15:15:22.624663 520 log.go:172] (0xc000a86e70) Data frame received for 3\nI0308 15:15:22.624684 520 log.go:172] (0xc0006eac80) (3) Data frame handling\nI0308 15:15:22.624699 520 log.go:172] (0xc0006eac80) (3) Data frame sent\nI0308 15:15:22.624710 520 log.go:172] (0xc000a86e70) Data frame received for 3\nI0308 15:15:22.624722 520 log.go:172] (0xc0006eac80) (3) Data frame handling\nI0308 15:15:22.625744 520 log.go:172] (0xc000a86e70) Data frame received for 1\nI0308 15:15:22.625757 520 log.go:172] (0xc0007339a0) (1) Data frame handling\nI0308 15:15:22.625764 520 log.go:172] (0xc0007339a0) (1) Data frame sent\nI0308 15:15:22.625773 520 log.go:172] (0xc000a86e70) (0xc0007339a0) Stream removed, broadcasting: 1\nI0308 15:15:22.625785 520 log.go:172] (0xc000a86e70) Go away received\nI0308 15:15:22.626069 520 log.go:172] (0xc000a86e70) (0xc0007339a0) Stream removed, broadcasting: 1\nI0308 15:15:22.626084 520 log.go:172] (0xc000a86e70) (0xc0006eac80) Stream removed, broadcasting: 3\nI0308 15:15:22.626093 520 log.go:172] (0xc000a86e70) (0xc0006ead20) Stream removed, broadcasting: 5\n" Mar 8 15:15:22.628: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 15:15:22.628: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 15:15:22.632: INFO: Found 1 stateful pods, waiting for 3 Mar 8 15:15:32.852: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 8 15:15:32.852: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 8 15:15:32.852: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Mar 8 15:15:42.636: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 8 15:15:42.636: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 8 15:15:42.636: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 8 15:15:42.642: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2638 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 15:15:42.854: INFO: stderr: "I0308 15:15:42.783898 
540 log.go:172] (0xc0003c4fd0) (0xc000685ae0) Create stream\nI0308 15:15:42.783971 540 log.go:172] (0xc0003c4fd0) (0xc000685ae0) Stream added, broadcasting: 1\nI0308 15:15:42.786331 540 log.go:172] (0xc0003c4fd0) Reply frame received for 1\nI0308 15:15:42.786386 540 log.go:172] (0xc0003c4fd0) (0xc000160000) Create stream\nI0308 15:15:42.786403 540 log.go:172] (0xc0003c4fd0) (0xc000160000) Stream added, broadcasting: 3\nI0308 15:15:42.787366 540 log.go:172] (0xc0003c4fd0) Reply frame received for 3\nI0308 15:15:42.787413 540 log.go:172] (0xc0003c4fd0) (0xc0001dc000) Create stream\nI0308 15:15:42.787423 540 log.go:172] (0xc0003c4fd0) (0xc0001dc000) Stream added, broadcasting: 5\nI0308 15:15:42.788363 540 log.go:172] (0xc0003c4fd0) Reply frame received for 5\nI0308 15:15:42.849466 540 log.go:172] (0xc0003c4fd0) Data frame received for 5\nI0308 15:15:42.849513 540 log.go:172] (0xc0001dc000) (5) Data frame handling\nI0308 15:15:42.849534 540 log.go:172] (0xc0001dc000) (5) Data frame sent\nI0308 15:15:42.849546 540 log.go:172] (0xc0003c4fd0) Data frame received for 5\nI0308 15:15:42.849556 540 log.go:172] (0xc0001dc000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 15:15:42.849599 540 log.go:172] (0xc0003c4fd0) Data frame received for 3\nI0308 15:15:42.849623 540 log.go:172] (0xc000160000) (3) Data frame handling\nI0308 15:15:42.849647 540 log.go:172] (0xc000160000) (3) Data frame sent\nI0308 15:15:42.849663 540 log.go:172] (0xc0003c4fd0) Data frame received for 3\nI0308 15:15:42.849670 540 log.go:172] (0xc000160000) (3) Data frame handling\nI0308 15:15:42.850721 540 log.go:172] (0xc0003c4fd0) Data frame received for 1\nI0308 15:15:42.850739 540 log.go:172] (0xc000685ae0) (1) Data frame handling\nI0308 15:15:42.850748 540 log.go:172] (0xc000685ae0) (1) Data frame sent\nI0308 15:15:42.850759 540 log.go:172] (0xc0003c4fd0) (0xc000685ae0) Stream removed, broadcasting: 1\nI0308 15:15:42.850781 540 log.go:172] (0xc0003c4fd0) Go away received\nI0308 15:15:42.851105 540 log.go:172] (0xc0003c4fd0) (0xc000685ae0) Stream removed, broadcasting: 1\nI0308 15:15:42.851125 540 log.go:172] (0xc0003c4fd0) (0xc000160000) Stream removed, broadcasting: 3\nI0308 15:15:42.851133 540 log.go:172] (0xc0003c4fd0) (0xc0001dc000) Stream removed, broadcasting: 5\n" Mar 8 15:15:42.854: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 15:15:42.854: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 15:15:42.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2638 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 15:15:43.064: INFO: stderr: "I0308 15:15:42.964342 563 log.go:172] (0xc000a1a8f0) (0xc000a9c320) Create stream\nI0308 15:15:42.964386 563 log.go:172] (0xc000a1a8f0) (0xc000a9c320) Stream added, broadcasting: 1\nI0308 15:15:42.967652 563 log.go:172] (0xc000a1a8f0) Reply frame received for 1\nI0308 15:15:42.967681 563 log.go:172] (0xc000a1a8f0) (0xc0005be640) Create stream\nI0308 15:15:42.967688 563 log.go:172] (0xc000a1a8f0) (0xc0005be640) Stream added, broadcasting: 3\nI0308 15:15:42.968304 563 log.go:172] (0xc000a1a8f0) Reply frame received for 3\nI0308 15:15:42.968337 563 log.go:172] (0xc000a1a8f0) (0xc00026f2c0) Create stream\nI0308 15:15:42.968352 563 log.go:172] (0xc000a1a8f0) (0xc00026f2c0) Stream added, broadcasting: 
5\nI0308 15:15:42.968978 563 log.go:172] (0xc000a1a8f0) Reply frame received for 5\nI0308 15:15:43.039821 563 log.go:172] (0xc000a1a8f0) Data frame received for 5\nI0308 15:15:43.039840 563 log.go:172] (0xc00026f2c0) (5) Data frame handling\nI0308 15:15:43.039852 563 log.go:172] (0xc00026f2c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 15:15:43.059753 563 log.go:172] (0xc000a1a8f0) Data frame received for 3\nI0308 15:15:43.059782 563 log.go:172] (0xc0005be640) (3) Data frame handling\nI0308 15:15:43.059790 563 log.go:172] (0xc0005be640) (3) Data frame sent\nI0308 15:15:43.059803 563 log.go:172] (0xc000a1a8f0) Data frame received for 5\nI0308 15:15:43.059809 563 log.go:172] (0xc00026f2c0) (5) Data frame handling\nI0308 15:15:43.060120 563 log.go:172] (0xc000a1a8f0) Data frame received for 3\nI0308 15:15:43.060143 563 log.go:172] (0xc0005be640) (3) Data frame handling\nI0308 15:15:43.061256 563 log.go:172] (0xc000a1a8f0) Data frame received for 1\nI0308 15:15:43.061267 563 log.go:172] (0xc000a9c320) (1) Data frame handling\nI0308 15:15:43.061279 563 log.go:172] (0xc000a9c320) (1) Data frame sent\nI0308 15:15:43.061367 563 log.go:172] (0xc000a1a8f0) (0xc000a9c320) Stream removed, broadcasting: 1\nI0308 15:15:43.061387 563 log.go:172] (0xc000a1a8f0) Go away received\nI0308 15:15:43.061699 563 log.go:172] (0xc000a1a8f0) (0xc000a9c320) Stream removed, broadcasting: 1\nI0308 15:15:43.061717 563 log.go:172] (0xc000a1a8f0) (0xc0005be640) Stream removed, broadcasting: 3\nI0308 15:15:43.061726 563 log.go:172] (0xc000a1a8f0) (0xc00026f2c0) Stream removed, broadcasting: 5\n" Mar 8 15:15:43.064: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 15:15:43.064: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 15:15:43.064: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2638 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 15:15:43.237: INFO: stderr: "I0308 15:15:43.152899 583 log.go:172] (0xc000ac46e0) (0xc0006a1ea0) Create stream\nI0308 15:15:43.152935 583 log.go:172] (0xc000ac46e0) (0xc0006a1ea0) Stream added, broadcasting: 1\nI0308 15:15:43.154319 583 log.go:172] (0xc000ac46e0) Reply frame received for 1\nI0308 15:15:43.154343 583 log.go:172] (0xc000ac46e0) (0xc000600780) Create stream\nI0308 15:15:43.154351 583 log.go:172] (0xc000ac46e0) (0xc000600780) Stream added, broadcasting: 3\nI0308 15:15:43.154790 583 log.go:172] (0xc000ac46e0) Reply frame received for 3\nI0308 15:15:43.154810 583 log.go:172] (0xc000ac46e0) (0xc000713400) Create stream\nI0308 15:15:43.154817 583 log.go:172] (0xc000ac46e0) (0xc000713400) Stream added, broadcasting: 5\nI0308 15:15:43.155208 583 log.go:172] (0xc000ac46e0) Reply frame received for 5\nI0308 15:15:43.207027 583 log.go:172] (0xc000ac46e0) Data frame received for 5\nI0308 15:15:43.207052 583 log.go:172] (0xc000713400) (5) Data frame handling\nI0308 15:15:43.207075 583 log.go:172] (0xc000713400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 15:15:43.234008 583 log.go:172] (0xc000ac46e0) Data frame received for 3\nI0308 15:15:43.234028 583 log.go:172] (0xc000600780) (3) Data frame handling\nI0308 15:15:43.234044 583 log.go:172] (0xc000600780) (3) Data frame sent\nI0308 15:15:43.234051 583 log.go:172] (0xc000ac46e0) Data frame received for 3\nI0308 
15:15:43.234056 583 log.go:172] (0xc000600780) (3) Data frame handling\nI0308 15:15:43.234164 583 log.go:172] (0xc000ac46e0) Data frame received for 5\nI0308 15:15:43.234173 583 log.go:172] (0xc000713400) (5) Data frame handling\nI0308 15:15:43.235016 583 log.go:172] (0xc000ac46e0) Data frame received for 1\nI0308 15:15:43.235029 583 log.go:172] (0xc0006a1ea0) (1) Data frame handling\nI0308 15:15:43.235042 583 log.go:172] (0xc0006a1ea0) (1) Data frame sent\nI0308 15:15:43.235050 583 log.go:172] (0xc000ac46e0) (0xc0006a1ea0) Stream removed, broadcasting: 1\nI0308 15:15:43.235078 583 log.go:172] (0xc000ac46e0) Go away received\nI0308 15:15:43.235242 583 log.go:172] (0xc000ac46e0) (0xc0006a1ea0) Stream removed, broadcasting: 1\nI0308 15:15:43.235250 583 log.go:172] (0xc000ac46e0) (0xc000600780) Stream removed, broadcasting: 3\nI0308 15:15:43.235254 583 log.go:172] (0xc000ac46e0) (0xc000713400) Stream removed, broadcasting: 5\n" Mar 8 15:15:43.237: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 15:15:43.237: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 15:15:43.237: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 15:15:43.239: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 8 15:15:53.247: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 8 15:15:53.247: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 8 15:15:53.247: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 8 15:15:53.342: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999354s Mar 8 15:15:54.347: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.910430016s Mar 8 15:15:55.352: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.905262557s Mar 8 15:15:56.356: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.899958693s Mar 8 15:15:57.361: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.895641985s Mar 8 15:15:58.366: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.890843162s Mar 8 15:15:59.371: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.885494501s Mar 8 15:16:00.375: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.881480244s Mar 8 15:16:01.379: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.877278563s Mar 8 15:16:02.383: INFO: Verifying statefulset ss doesn't scale past 3 for another 872.735833ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-2638 Mar 8 15:16:03.387: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2638 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 15:16:03.583: INFO: stderr: "I0308 15:16:03.521335 604 log.go:172] (0xc00003a790) (0xc0009e0000) Create stream\nI0308 15:16:03.521382 604 log.go:172] (0xc00003a790) (0xc0009e0000) Stream added, broadcasting: 1\nI0308 15:16:03.523635 604 log.go:172] (0xc00003a790) Reply frame received for 1\nI0308 15:16:03.523679 604 log.go:172] (0xc00003a790) (0xc0006c9c20) Create stream\nI0308 15:16:03.523690 604 log.go:172] (0xc00003a790) (0xc0006c9c20) Stream added, broadcasting: 3\nI0308 15:16:03.524488 604 log.go:172]
(0xc00003a790) Reply frame received for 3\nI0308 15:16:03.524511 604 log.go:172] (0xc00003a790) (0xc0009e00a0) Create stream\nI0308 15:16:03.524517 604 log.go:172] (0xc00003a790) (0xc0009e00a0) Stream added, broadcasting: 5\nI0308 15:16:03.525173 604 log.go:172] (0xc00003a790) Reply frame received for 5\nI0308 15:16:03.579857 604 log.go:172] (0xc00003a790) Data frame received for 3\nI0308 15:16:03.579917 604 log.go:172] (0xc0006c9c20) (3) Data frame handling\nI0308 15:16:03.579929 604 log.go:172] (0xc0006c9c20) (3) Data frame sent\nI0308 15:16:03.579937 604 log.go:172] (0xc00003a790) Data frame received for 3\nI0308 15:16:03.579943 604 log.go:172] (0xc0006c9c20) (3) Data frame handling\nI0308 15:16:03.580000 604 log.go:172] (0xc00003a790) Data frame received for 5\nI0308 15:16:03.580023 604 log.go:172] (0xc0009e00a0) (5) Data frame handling\nI0308 15:16:03.580048 604 log.go:172] (0xc0009e00a0) (5) Data frame sent\nI0308 15:16:03.580062 604 log.go:172] (0xc00003a790) Data frame received for 5\nI0308 15:16:03.580075 604 log.go:172] (0xc0009e00a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 15:16:03.581075 604 log.go:172] (0xc00003a790) Data frame received for 1\nI0308 15:16:03.581094 604 log.go:172] (0xc0009e0000) (1) Data frame handling\nI0308 15:16:03.581102 604 log.go:172] (0xc0009e0000) (1) Data frame sent\nI0308 15:16:03.581112 604 log.go:172] (0xc00003a790) (0xc0009e0000) Stream removed, broadcasting: 1\nI0308 15:16:03.581140 604 log.go:172] (0xc00003a790) Go away received\nI0308 15:16:03.581388 604 log.go:172] (0xc00003a790) (0xc0009e0000) Stream removed, broadcasting: 1\nI0308 15:16:03.581402 604 log.go:172] (0xc00003a790) (0xc0006c9c20) Stream removed, broadcasting: 3\nI0308 15:16:03.581410 604 log.go:172] (0xc00003a790) (0xc0009e00a0) Stream removed, broadcasting: 5\n" Mar 8 15:16:03.584: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 15:16:03.584: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 15:16:03.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2638 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 15:16:04.155: INFO: stderr: "I0308 15:16:04.069976 625 log.go:172] (0xc000b0e000) (0xc0005b92c0) Create stream\nI0308 15:16:04.070043 625 log.go:172] (0xc000b0e000) (0xc0005b92c0) Stream added, broadcasting: 1\nI0308 15:16:04.073662 625 log.go:172] (0xc000b0e000) Reply frame received for 1\nI0308 15:16:04.073720 625 log.go:172] (0xc000b0e000) (0xc000902000) Create stream\nI0308 15:16:04.073738 625 log.go:172] (0xc000b0e000) (0xc000902000) Stream added, broadcasting: 3\nI0308 15:16:04.075909 625 log.go:172] (0xc000b0e000) Reply frame received for 3\nI0308 15:16:04.075954 625 log.go:172] (0xc000b0e000) (0xc0005b9360) Create stream\nI0308 15:16:04.075962 625 log.go:172] (0xc000b0e000) (0xc0005b9360) Stream added, broadcasting: 5\nI0308 15:16:04.076819 625 log.go:172] (0xc000b0e000) Reply frame received for 5\nI0308 15:16:04.141320 625 log.go:172] (0xc000b0e000) Data frame received for 5\nI0308 15:16:04.141344 625 log.go:172] (0xc0005b9360) (5) Data frame handling\nI0308 15:16:04.141358 625 log.go:172] (0xc0005b9360) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 15:16:04.150600 625 log.go:172] (0xc000b0e000) Data frame received for 3\nI0308 
15:16:04.150620 625 log.go:172] (0xc000902000) (3) Data frame handling\nI0308 15:16:04.150641 625 log.go:172] (0xc000902000) (3) Data frame sent\nI0308 15:16:04.150764 625 log.go:172] (0xc000b0e000) Data frame received for 5\nI0308 15:16:04.150788 625 log.go:172] (0xc0005b9360) (5) Data frame handling\nI0308 15:16:04.150800 625 log.go:172] (0xc000b0e000) Data frame received for 3\nI0308 15:16:04.150816 625 log.go:172] (0xc000902000) (3) Data frame handling\nI0308 15:16:04.152035 625 log.go:172] (0xc000b0e000) Data frame received for 1\nI0308 15:16:04.152051 625 log.go:172] (0xc0005b92c0) (1) Data frame handling\nI0308 15:16:04.152064 625 log.go:172] (0xc0005b92c0) (1) Data frame sent\nI0308 15:16:04.152082 625 log.go:172] (0xc000b0e000) (0xc0005b92c0) Stream removed, broadcasting: 1\nI0308 15:16:04.152271 625 log.go:172] (0xc000b0e000) Go away received\nI0308 15:16:04.152517 625 log.go:172] (0xc000b0e000) (0xc0005b92c0) Stream removed, broadcasting: 1\nI0308 15:16:04.152545 625 log.go:172] (0xc000b0e000) (0xc000902000) Stream removed, broadcasting: 3\nI0308 15:16:04.152558 625 log.go:172] (0xc000b0e000) (0xc0005b9360) Stream removed, broadcasting: 5\n" Mar 8 15:16:04.155: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 15:16:04.155: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 15:16:04.156: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2638 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 15:16:04.315: INFO: stderr: "I0308 15:16:04.266320 646 log.go:172] (0xc000a0d4a0) (0xc000a4a6e0) Create stream\nI0308 15:16:04.266363 646 log.go:172] (0xc000a0d4a0) (0xc000a4a6e0) Stream added, broadcasting: 1\nI0308 15:16:04.267957 646 log.go:172] (0xc000a0d4a0) Reply frame received for 1\nI0308 15:16:04.267990 646 log.go:172] (0xc000a0d4a0) (0xc000a4a780) Create stream\nI0308 15:16:04.267999 646 log.go:172] (0xc000a0d4a0) (0xc000a4a780) Stream added, broadcasting: 3\nI0308 15:16:04.268675 646 log.go:172] (0xc000a0d4a0) Reply frame received for 3\nI0308 15:16:04.268700 646 log.go:172] (0xc000a0d4a0) (0xc000994320) Create stream\nI0308 15:16:04.268708 646 log.go:172] (0xc000a0d4a0) (0xc000994320) Stream added, broadcasting: 5\nI0308 15:16:04.269360 646 log.go:172] (0xc000a0d4a0) Reply frame received for 5\nI0308 15:16:04.312046 646 log.go:172] (0xc000a0d4a0) Data frame received for 3\nI0308 15:16:04.312072 646 log.go:172] (0xc000a4a780) (3) Data frame handling\nI0308 15:16:04.312082 646 log.go:172] (0xc000a4a780) (3) Data frame sent\nI0308 15:16:04.312090 646 log.go:172] (0xc000a0d4a0) Data frame received for 3\nI0308 15:16:04.312097 646 log.go:172] (0xc000a4a780) (3) Data frame handling\nI0308 15:16:04.312118 646 log.go:172] (0xc000a0d4a0) Data frame received for 5\nI0308 15:16:04.312132 646 log.go:172] (0xc000994320) (5) Data frame handling\nI0308 15:16:04.312142 646 log.go:172] (0xc000994320) (5) Data frame sent\nI0308 15:16:04.312152 646 log.go:172] (0xc000a0d4a0) Data frame received for 5\nI0308 15:16:04.312160 646 log.go:172] (0xc000994320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 15:16:04.312430 646 log.go:172] (0xc000a0d4a0) Data frame received for 1\nI0308 15:16:04.312450 646 log.go:172] (0xc000a4a6e0) (1) Data frame handling\nI0308 15:16:04.312456 646 log.go:172] (0xc000a4a6e0) (1) Data frame 
sent\nI0308 15:16:04.312465 646 log.go:172] (0xc000a0d4a0) (0xc000a4a6e0) Stream removed, broadcasting: 1\nI0308 15:16:04.312473 646 log.go:172] (0xc000a0d4a0) Go away received\nI0308 15:16:04.312744 646 log.go:172] (0xc000a0d4a0) (0xc000a4a6e0) Stream removed, broadcasting: 1\nI0308 15:16:04.312760 646 log.go:172] (0xc000a0d4a0) (0xc000a4a780) Stream removed, broadcasting: 3\nI0308 15:16:04.312766 646 log.go:172] (0xc000a0d4a0) (0xc000994320) Stream removed, broadcasting: 5\n" Mar 8 15:16:04.315: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 15:16:04.315: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 15:16:04.315: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 8 15:16:24.355: INFO: Deleting all statefulset in ns statefulset-2638 Mar 8 15:16:24.384: INFO: Scaling statefulset ss to 0 Mar 8 15:16:24.425: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 15:16:24.427: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:16:24.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2638" for this suite. • [SLOW TEST:92.404 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":280,"completed":26,"skipped":356,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:16:24.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name projected-secret-test-c6dfbe36-eb1a-4eb2-9f42-e39c6a32dad1 STEP: Creating a pod to test consume secrets Mar 8 15:16:24.559: INFO: Waiting up to 5m0s for pod 
"pod-projected-secrets-f0800da1-91ec-4aa8-a77d-175412540f05" in namespace "projected-8041" to be "success or failure" Mar 8 15:16:24.580: INFO: Pod "pod-projected-secrets-f0800da1-91ec-4aa8-a77d-175412540f05": Phase="Pending", Reason="", readiness=false. Elapsed: 21.392798ms Mar 8 15:16:26.584: INFO: Pod "pod-projected-secrets-f0800da1-91ec-4aa8-a77d-175412540f05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.025236518s STEP: Saw pod success Mar 8 15:16:26.584: INFO: Pod "pod-projected-secrets-f0800da1-91ec-4aa8-a77d-175412540f05" satisfied condition "success or failure" Mar 8 15:16:26.586: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-f0800da1-91ec-4aa8-a77d-175412540f05 container projected-secret-volume-test: STEP: delete the pod Mar 8 15:16:26.617: INFO: Waiting for pod pod-projected-secrets-f0800da1-91ec-4aa8-a77d-175412540f05 to disappear Mar 8 15:16:26.622: INFO: Pod pod-projected-secrets-f0800da1-91ec-4aa8-a77d-175412540f05 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:16:26.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8041" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":27,"skipped":404,"failed":0} S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:16:26.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-7594 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 8 15:16:26.690: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 8 15:16:26.737: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 8 15:16:28.741: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 15:16:30.741: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 15:16:32.740: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 15:16:34.740: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 15:16:36.741: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 15:16:38.740: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 15:16:40.762: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 15:16:42.740: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 15:16:44.740: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 8 15:16:44.746: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test 
pods Mar 8 15:16:46.779: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.59:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7594 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 15:16:46.779: INFO: >>> kubeConfig: /root/.kube/config I0308 15:16:46.804160 7 log.go:172] (0xc002021550) (0xc0023f6b40) Create stream I0308 15:16:46.804183 7 log.go:172] (0xc002021550) (0xc0023f6b40) Stream added, broadcasting: 1 I0308 15:16:46.806247 7 log.go:172] (0xc002021550) Reply frame received for 1 I0308 15:16:46.806292 7 log.go:172] (0xc002021550) (0xc002905400) Create stream I0308 15:16:46.806304 7 log.go:172] (0xc002021550) (0xc002905400) Stream added, broadcasting: 3 I0308 15:16:46.807280 7 log.go:172] (0xc002021550) Reply frame received for 3 I0308 15:16:46.807314 7 log.go:172] (0xc002021550) (0xc0023f6be0) Create stream I0308 15:16:46.807328 7 log.go:172] (0xc002021550) (0xc0023f6be0) Stream added, broadcasting: 5 I0308 15:16:46.808505 7 log.go:172] (0xc002021550) Reply frame received for 5 I0308 15:16:46.859138 7 log.go:172] (0xc002021550) Data frame received for 3 I0308 15:16:46.859165 7 log.go:172] (0xc002905400) (3) Data frame handling I0308 15:16:46.859175 7 log.go:172] (0xc002905400) (3) Data frame sent I0308 15:16:46.859183 7 log.go:172] (0xc002021550) Data frame received for 3 I0308 15:16:46.859190 7 log.go:172] (0xc002905400) (3) Data frame handling I0308 15:16:46.859235 7 log.go:172] (0xc002021550) Data frame received for 5 I0308 15:16:46.859274 7 log.go:172] (0xc0023f6be0) (5) Data frame handling I0308 15:16:46.860597 7 log.go:172] (0xc002021550) Data frame received for 1 I0308 15:16:46.860619 7 log.go:172] (0xc0023f6b40) (1) Data frame handling I0308 15:16:46.860639 7 log.go:172] (0xc0023f6b40) (1) Data frame sent I0308 15:16:46.860736 7 log.go:172] (0xc002021550) (0xc0023f6b40) Stream removed, broadcasting: 1 I0308 15:16:46.860843 7 log.go:172] (0xc002021550) (0xc0023f6b40) Stream removed, broadcasting: 1 I0308 15:16:46.860865 7 log.go:172] (0xc002021550) (0xc002905400) Stream removed, broadcasting: 3 I0308 15:16:46.860875 7 log.go:172] (0xc002021550) (0xc0023f6be0) Stream removed, broadcasting: 5 Mar 8 15:16:46.860: INFO: Found all expected endpoints: [netserver-0] I0308 15:16:46.860949 7 log.go:172] (0xc002021550) Go away received Mar 8 15:16:46.863: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.19:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7594 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 15:16:46.863: INFO: >>> kubeConfig: /root/.kube/config I0308 15:16:46.888455 7 log.go:172] (0xc001fd24d0) (0xc0029745a0) Create stream I0308 15:16:46.888486 7 log.go:172] (0xc001fd24d0) (0xc0029745a0) Stream added, broadcasting: 1 I0308 15:16:46.891540 7 log.go:172] (0xc001fd24d0) Reply frame received for 1 I0308 15:16:46.891585 7 log.go:172] (0xc001fd24d0) (0xc001a13f40) Create stream I0308 15:16:46.891598 7 log.go:172] (0xc001fd24d0) (0xc001a13f40) Stream added, broadcasting: 3 I0308 15:16:46.893090 7 log.go:172] (0xc001fd24d0) Reply frame received for 3 I0308 15:16:46.893124 7 log.go:172] (0xc001fd24d0) (0xc0027b2000) Create stream I0308 15:16:46.893140 7 log.go:172] (0xc001fd24d0) (0xc0027b2000) Stream added, broadcasting: 5 I0308 15:16:46.894097 7 log.go:172] (0xc001fd24d0) Reply 
frame received for 5 I0308 15:16:46.962886 7 log.go:172] (0xc001fd24d0) Data frame received for 3 I0308 15:16:46.962924 7 log.go:172] (0xc001a13f40) (3) Data frame handling I0308 15:16:46.962945 7 log.go:172] (0xc001a13f40) (3) Data frame sent I0308 15:16:46.963101 7 log.go:172] (0xc001fd24d0) Data frame received for 3 I0308 15:16:46.963122 7 log.go:172] (0xc001a13f40) (3) Data frame handling I0308 15:16:46.963139 7 log.go:172] (0xc001fd24d0) Data frame received for 5 I0308 15:16:46.963148 7 log.go:172] (0xc0027b2000) (5) Data frame handling I0308 15:16:46.964290 7 log.go:172] (0xc001fd24d0) Data frame received for 1 I0308 15:16:46.964322 7 log.go:172] (0xc0029745a0) (1) Data frame handling I0308 15:16:46.964343 7 log.go:172] (0xc0029745a0) (1) Data frame sent I0308 15:16:46.964359 7 log.go:172] (0xc001fd24d0) (0xc0029745a0) Stream removed, broadcasting: 1 I0308 15:16:46.964376 7 log.go:172] (0xc001fd24d0) Go away received I0308 15:16:46.964479 7 log.go:172] (0xc001fd24d0) (0xc0029745a0) Stream removed, broadcasting: 1 I0308 15:16:46.964492 7 log.go:172] (0xc001fd24d0) (0xc001a13f40) Stream removed, broadcasting: 3 I0308 15:16:46.964502 7 log.go:172] (0xc001fd24d0) (0xc0027b2000) Stream removed, broadcasting: 5 Mar 8 15:16:46.964: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:16:46.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7594" for this suite. • [SLOW TEST:20.340 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":28,"skipped":405,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:16:46.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 15:16:47.510: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 15:16:49.521: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277407, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277407, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277407, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277407, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 15:16:52.545: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:16:52.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8144" for this suite. STEP: Destroying namespace "webhook-8144-markers" for this suite. 
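The discovery check above walks three documents: /apis, /apis/admissionregistration.k8s.io, and /apis/admissionregistration.k8s.io/v1, asserting that the group, the v1 version, and both webhook-configuration resources are advertised. A minimal client-go sketch of the same walk (assuming v0.18+-style APIs; the kubeconfig path is illustrative):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Sketch only: the kubeconfig path is illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// /apis: the top-level discovery document, one entry per API group.
	groups, err := cs.Discovery().ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		if g.Name == "admissionregistration.k8s.io" {
			fmt.Println("group present, preferred version:", g.PreferredVersion.GroupVersion)
		}
	}

	// /apis/admissionregistration.k8s.io/v1: the per-version resource list.
	rl, err := cs.Discovery().ServerResourcesForGroupVersion("admissionregistration.k8s.io/v1")
	if err != nil {
		panic(err)
	}
	for _, r := range rl.APIResources {
		switch r.Name {
		case "mutatingwebhookconfigurations", "validatingwebhookconfigurations":
			fmt.Println("webhook resource present:", r.Name)
		}
	}
}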
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:5.706 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":280,"completed":29,"skipped":414,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:16:52.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0308 15:16:53.417600 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 8 15:16:53.417: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:16:53.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7658" for this suite. 
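The gc-7658 spec above hinges on deleteOptions.PropagationPolicy: with Orphan, the garbage collector removes the Deployment but strips, rather than follows, the ownerReferences on its ReplicaSet, so the rs survives. A minimal sketch of the same delete (client-go v0.18+; the deployment and namespace names are illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Sketch only: kubeconfig path and object names are illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Orphan propagation: delete the Deployment, keep its ReplicaSets.
	orphan := metav1.DeletePropagationOrphan
	err = cs.AppsV1().Deployments("default").Delete(ctx, "test-deployment",
		metav1.DeleteOptions{PropagationPolicy: &orphan})
	if err != nil {
		panic(err)
	}

	// The ReplicaSet survives; the GC merely strips its ownerReferences
	// instead of cascading the delete down to it and its pods.
	rss, err := cs.AppsV1().ReplicaSets("default").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("replicasets still present:", len(rss.Items))
}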
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":280,"completed":30,"skipped":421,"failed":0} ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:16:53.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 15:16:53.544: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 8 15:16:58.583: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 8 15:16:58.583: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 8 15:17:02.684: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-7101 /apis/apps/v1/namespaces/deployment-7101/deployments/test-cleanup-deployment 74b88808-4132-4cd6-b89a-c0176d0415be 10069 1 2020-03-08 15:16:58 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002a50188 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-08 15:16:58 +0000 UTC,LastTransitionTime:2020-03-08 15:16:58 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-03-08 15:17:00 +0000 
UTC,LastTransitionTime:2020-03-08 15:16:58 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 8 15:17:02.687: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-7101 /apis/apps/v1/namespaces/deployment-7101/replicasets/test-cleanup-deployment-55ffc6b7b6 44ac1fa3-92f7-4a9d-820e-2954eb1ea1dd 10058 1 2020-03-08 15:16:58 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 74b88808-4132-4cd6-b89a-c0176d0415be 0xc002a50547 0xc002a50548}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002a505b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 8 15:17:02.689: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-wgtm9" is available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-wgtm9 test-cleanup-deployment-55ffc6b7b6- deployment-7101 /api/v1/namespaces/deployment-7101/pods/test-cleanup-deployment-55ffc6b7b6-wgtm9 c3a2f418-1130-4459-9b96-81c2a9ed6df9 10057 0 2020-03-08 15:16:58 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 44ac1fa3-92f7-4a9d-820e-2954eb1ea1dd 0xc002baf4c7 0xc002baf4c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kxgfc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kxgfc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kxgfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 15:16:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 15:17:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 15:17:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 15:16:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.2.21,StartTime:2020-03-08 15:16:58 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 15:17:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://60d794b51efbb7bec25e0417d68e32d2d0762aaf1a5acfd71ca685359b2bcdf7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.21,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:17:02.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7101" for this suite. • [SLOW TEST:9.269 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":280,"completed":31,"skipped":421,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:17:02.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 15:17:03.222: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 15:17:05.232: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277423, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277423, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277423, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277423, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 15:17:08.295: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Listing all of the created mutating webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of mutating webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:17:08.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5632" for this suite. STEP: Destroying namespace "webhook-5632-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:6.068 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":280,"completed":32,"skipped":426,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:17:08.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-7f1e933b-656c-4a2d-b2ec-5721a5a879b7 STEP: Creating a pod to test consume secrets Mar 8 15:17:08.832: INFO: Waiting up to 5m0s for pod "pod-secrets-43994418-67fe-4016-bcc3-4525833f7aa2" in namespace "secrets-6202" to be "success or failure" Mar 8 15:17:08.843: INFO: Pod "pod-secrets-43994418-67fe-4016-bcc3-4525833f7aa2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.656681ms Mar 8 15:17:10.847: INFO: Pod "pod-secrets-43994418-67fe-4016-bcc3-4525833f7aa2": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 2.014924252s STEP: Saw pod success Mar 8 15:17:10.847: INFO: Pod "pod-secrets-43994418-67fe-4016-bcc3-4525833f7aa2" satisfied condition "success or failure" Mar 8 15:17:10.850: INFO: Trying to get logs from node latest-worker pod pod-secrets-43994418-67fe-4016-bcc3-4525833f7aa2 container secret-volume-test: STEP: delete the pod Mar 8 15:17:10.888: INFO: Waiting for pod pod-secrets-43994418-67fe-4016-bcc3-4525833f7aa2 to disappear Mar 8 15:17:10.892: INFO: Pod pod-secrets-43994418-67fe-4016-bcc3-4525833f7aa2 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:17:10.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6202" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":33,"skipped":438,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:17:10.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 8 15:17:10.991: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:17:22.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8571" for this suite. 
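The pods-8571 spec above exercises the full create → watch → graceful-delete cycle; the watch is opened before submission so both the creation and the termination events are observed. A sketch of that flow (client-go v0.18+; namespace, label, and pod name are illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Sketch only: kubeconfig path, namespace, and names are illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, ns := context.TODO(), "default"

	// Open the watch first so no event is missed.
	w, err := cs.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{LabelSelector: "test=submit-remove"})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-submit-remove",
			Labels: map[string]string{"test": "submit-remove"}},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name:  "agnhost",
			Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
		}}},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Graceful delete, then drain events until the Deleted one arrives.
	grace := int64(30)
	if err := cs.CoreV1().Pods(ns).Delete(ctx, pod.Name,
		metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		fmt.Println("observed:", ev.Type)
		if ev.Type == watch.Deleted {
			break
		}
	}
}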
• [SLOW TEST:11.590 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":280,"completed":34,"skipped":447,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:17:22.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:17:28.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1287" for this suite. • [SLOW TEST:5.765 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":280,"completed":35,"skipped":448,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:17:28.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 8 15:17:28.305: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d64a77b1-fd52-4411-a5ac-65b9a778ac38" in namespace "downward-api-9176" to be "success or failure" Mar 8 15:17:28.318: INFO: Pod "downwardapi-volume-d64a77b1-fd52-4411-a5ac-65b9a778ac38": Phase="Pending", Reason="", 
readiness=false. Elapsed: 13.655483ms Mar 8 15:17:30.337: INFO: Pod "downwardapi-volume-d64a77b1-fd52-4411-a5ac-65b9a778ac38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03242453s Mar 8 15:17:32.340: INFO: Pod "downwardapi-volume-d64a77b1-fd52-4411-a5ac-65b9a778ac38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035644577s STEP: Saw pod success Mar 8 15:17:32.340: INFO: Pod "downwardapi-volume-d64a77b1-fd52-4411-a5ac-65b9a778ac38" satisfied condition "success or failure" Mar 8 15:17:32.343: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d64a77b1-fd52-4411-a5ac-65b9a778ac38 container client-container: STEP: delete the pod Mar 8 15:17:32.362: INFO: Waiting for pod downwardapi-volume-d64a77b1-fd52-4411-a5ac-65b9a778ac38 to disappear Mar 8 15:17:32.378: INFO: Pod downwardapi-volume-d64a77b1-fd52-4411-a5ac-65b9a778ac38 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:17:32.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9176" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":280,"completed":36,"skipped":451,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:17:32.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 15:17:32.972: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 15:17:34.981: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277452, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277452, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277453, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277452, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 
15:17:38.007: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:17:38.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1353" for this suite. STEP: Destroying namespace "webhook-1353-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:5.823 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":280,"completed":37,"skipped":453,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:17:38.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:17:42.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4753" for this suite. 
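The emptydir-wrapper spec above mounts a secret volume and a configMap volume side by side in one pod, then cleans up secret, configMap, and pod, verifying the wrapped volumes do not conflict. Roughly the pod shape involved (client-go v0.18+; all names are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Sketch only: kubeconfig path and object names are illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// One pod, two wrapped volumes mounted at separate paths.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-configmap"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "test",
				Image:   "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Command: []string{"sleep", "3600"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-vol", MountPath: "/etc/secret-volume", ReadOnly: true},
					{Name: "cm-vol", MountPath: "/etc/configmap-volume", ReadOnly: true},
				},
			}},
			Volumes: []corev1.Volume{
				{Name: "secret-vol", VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "wrapper-secret"}}},
				{Name: "cm-vol", VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "wrapper-configmap"}}}},
			},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}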
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":280,"completed":38,"skipped":474,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:17:42.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Mar 8 15:17:42.531: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:17:46.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7677" for this suite. •{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":280,"completed":39,"skipped":531,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:17:46.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 15:17:47.171: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 15:17:50.193: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 15:17:50.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4012-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: 
Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:17:51.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7202" for this suite. STEP: Destroying namespace "webhook-7202-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:5.270 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":280,"completed":40,"skipped":540,"failed":0} SSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:17:51.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-b2802a27-1f58-4058-a898-ce0c66270306 STEP: Creating a pod to test consume secrets Mar 8 15:17:51.835: INFO: Waiting up to 5m0s for pod "pod-secrets-6329db27-3e6b-4dfb-bcdd-9133fd4cbc23" in namespace "secrets-3512" to be "success or failure" Mar 8 15:17:51.844: INFO: Pod "pod-secrets-6329db27-3e6b-4dfb-bcdd-9133fd4cbc23": Phase="Pending", Reason="", readiness=false. Elapsed: 8.838824ms Mar 8 15:17:53.847: INFO: Pod "pod-secrets-6329db27-3e6b-4dfb-bcdd-9133fd4cbc23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012246141s STEP: Saw pod success Mar 8 15:17:53.847: INFO: Pod "pod-secrets-6329db27-3e6b-4dfb-bcdd-9133fd4cbc23" satisfied condition "success or failure" Mar 8 15:17:53.850: INFO: Trying to get logs from node latest-worker pod pod-secrets-6329db27-3e6b-4dfb-bcdd-9133fd4cbc23 container secret-env-test: STEP: delete the pod Mar 8 15:17:53.880: INFO: Waiting for pod pod-secrets-6329db27-3e6b-4dfb-bcdd-9133fd4cbc23 to disappear Mar 8 15:17:53.912: INFO: Pod pod-secrets-6329db27-3e6b-4dfb-bcdd-9133fd4cbc23 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:17:53.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3512" for this suite. 
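The secrets-3512 spec above injects a secret key into a container through an environment variable rather than a volume. A sketch of the secret and the consuming pod (client-go v0.18+; names and the busybox image are illustrative). The secrets-8460 spec below checks the inverse: a Secret whose Data map uses an empty string as a key is rejected by apiserver validation at create time.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Sketch only: kubeconfig path, namespace, and names are illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, ns := context.TODO(), "default"

	// A secret with a single non-empty key; an empty key ("") would fail
	// apiserver validation at create time.
	sec := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	if _, err := cs.CoreV1().Secrets(ns).Create(ctx, sec, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Consume the key as an env var; the test asserts it appears in output.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-env"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}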
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":280,"completed":41,"skipped":544,"failed":0} SSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:17:53.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name secret-emptykey-test-3e2d6f3f-eaa8-485a-9182-4e7c2f9688cd [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:17:53.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8460" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":280,"completed":42,"skipped":548,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:17:53.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Mar 8 15:17:54.076: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 15:17:54.080: INFO: Number of nodes with available pods: 0 Mar 8 15:17:54.080: INFO: Node latest-worker is running more than one daemon pod Mar 8 15:17:55.086: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 15:17:55.089: INFO: Number of nodes with available pods: 0 Mar 8 15:17:55.089: INFO: Node latest-worker is running more than one daemon pod Mar 8 15:17:56.085: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 15:17:56.088: INFO: Number of nodes with available pods: 2 Mar 8 15:17:56.088: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Mar 8 15:17:56.120: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 15:17:56.122: INFO: Number of nodes with available pods: 1 Mar 8 15:17:56.122: INFO: Node latest-worker is running more than one daemon pod Mar 8 15:17:57.126: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 15:17:57.128: INFO: Number of nodes with available pods: 1 Mar 8 15:17:57.128: INFO: Node latest-worker is running more than one daemon pod Mar 8 15:17:58.134: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 15:17:58.136: INFO: Number of nodes with available pods: 1 Mar 8 15:17:58.136: INFO: Node latest-worker is running more than one daemon pod Mar 8 15:17:59.127: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 15:17:59.132: INFO: Number of nodes with available pods: 1 Mar 8 15:17:59.132: INFO: Node latest-worker is running more than one daemon pod Mar 8 15:18:00.160: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 15:18:00.164: INFO: Number of nodes with available pods: 1 Mar 8 15:18:00.164: INFO: Node latest-worker is running more than one daemon pod Mar 8 15:18:01.127: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 15:18:01.130: INFO: Number of nodes with available pods: 1 Mar 8 15:18:01.130: INFO: Node latest-worker is running more than one daemon pod Mar 8 15:18:02.127: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 15:18:02.130: INFO: Number of nodes with available pods: 1 Mar 8 15:18:02.130: INFO: Node latest-worker is running more than one daemon pod Mar 8 15:18:03.125: INFO: DaemonSet pods can't tolerate node latest-control-plane with 
taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 15:18:03.127: INFO: Number of nodes with available pods: 1 Mar 8 15:18:03.127: INFO: Node latest-worker is running more than one daemon pod Mar 8 15:18:04.126: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 15:18:04.152: INFO: Number of nodes with available pods: 1 Mar 8 15:18:04.152: INFO: Node latest-worker is running more than one daemon pod Mar 8 15:18:05.127: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 15:18:05.129: INFO: Number of nodes with available pods: 2 Mar 8 15:18:05.129: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2637, will wait for the garbage collector to delete the pods Mar 8 15:18:05.190: INFO: Deleting DaemonSet.extensions daemon-set took: 5.88406ms Mar 8 15:18:05.490: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.250892ms Mar 8 15:18:12.093: INFO: Number of nodes with available pods: 0 Mar 8 15:18:12.093: INFO: Number of running nodes: 0, number of available pods: 0 Mar 8 15:18:12.158: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2637/daemonsets","resourceVersion":"10883"},"items":null} Mar 8 15:18:12.162: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2637/pods","resourceVersion":"10883"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:18:12.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2637" for this suite. 
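The revive step above (delete one daemon pod, wait for the controller to replace it) amounts to this loop, under the same illustrative names and client-go assumptions as the previous sketch:

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Sketch only: kubeconfig path, namespace, and names are illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, ns := context.TODO(), "default"

	// Pick one daemon pod and delete it.
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: "app=daemon-set"})
	if err != nil {
		panic(err)
	}
	if len(pods.Items) == 0 {
		panic("no daemon pods found")
	}
	if err := cs.CoreV1().Pods(ns).Delete(ctx, pods.Items[0].Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}

	// Availability dips by one, then the controller reschedules; poll until
	// every eligible node is covered again.
	err = wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		ds, err := cs.AppsV1().DaemonSets(ns).Get(ctx, "daemon-set", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return ds.Status.NumberAvailable == ds.Status.DesiredNumberScheduled, nil
	})
	if err != nil {
		panic(err)
	}
}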
• [SLOW TEST:18.199 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":280,"completed":43,"skipped":582,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:18:12.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:18:12.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-71" for this suite. 
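The QOS-class check above rests on a simple rule: a pod whose containers all set requests equal to limits for both cpu and memory is classed Guaranteed. A sketch that creates such a pod and reads the class back (client-go v0.18+; names are illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Sketch only: kubeconfig path, namespace, and names are illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, ns := context.TODO(), "default"

	// Requests == limits for both cpu and memory in every container.
	rl := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "qos-pod"},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name:      "agnhost",
			Image:     "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
			Resources: corev1.ResourceRequirements{Requests: rl, Limits: rl},
		}}},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	got, err := cs.CoreV1().Pods(ns).Get(ctx, "qos-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("QOS class:", got.Status.QOSClass) // expect Guaranteed
}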
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":280,"completed":44,"skipped":637,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:18:12.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-1792 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 8 15:18:12.362: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 8 15:18:12.399: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 8 15:18:14.401: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 8 15:18:16.402: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 15:18:18.410: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 15:18:20.402: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 15:18:22.409: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 15:18:24.402: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 15:18:26.402: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 8 15:18:26.407: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 8 15:18:28.498: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.77:8080/dial?request=hostname&protocol=http&host=10.244.1.76&port=8080&tries=1'] Namespace:pod-network-test-1792 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 15:18:28.498: INFO: >>> kubeConfig: /root/.kube/config I0308 15:18:28.530367 7 log.go:172] (0xc0020208f0) (0xc001b548c0) Create stream I0308 15:18:28.530392 7 log.go:172] (0xc0020208f0) (0xc001b548c0) Stream added, broadcasting: 1 I0308 15:18:28.532163 7 log.go:172] (0xc0020208f0) Reply frame received for 1 I0308 15:18:28.532199 7 log.go:172] (0xc0020208f0) (0xc0017ba320) Create stream I0308 15:18:28.532212 7 log.go:172] (0xc0020208f0) (0xc0017ba320) Stream added, broadcasting: 3 I0308 15:18:28.533222 7 log.go:172] (0xc0020208f0) Reply frame received for 3 I0308 15:18:28.533256 7 log.go:172] (0xc0020208f0) (0xc001f4c000) Create stream I0308 15:18:28.533271 7 log.go:172] (0xc0020208f0) (0xc001f4c000) Stream added, broadcasting: 5 I0308 15:18:28.534600 7 log.go:172] (0xc0020208f0) Reply frame received for 5 I0308 15:18:28.612200 7 log.go:172] (0xc0020208f0) Data frame received for 3 I0308 15:18:28.612229 7 log.go:172] (0xc0017ba320) (3) Data frame handling I0308 15:18:28.612247 7 log.go:172] (0xc0017ba320) (3) Data frame sent I0308 15:18:28.612472 7 log.go:172] 
(0xc0020208f0) Data frame received for 5 I0308 15:18:28.612500 7 log.go:172] (0xc001f4c000) (5) Data frame handling I0308 15:18:28.612908 7 log.go:172] (0xc0020208f0) Data frame received for 3 I0308 15:18:28.612928 7 log.go:172] (0xc0017ba320) (3) Data frame handling I0308 15:18:28.614538 7 log.go:172] (0xc0020208f0) Data frame received for 1 I0308 15:18:28.614563 7 log.go:172] (0xc001b548c0) (1) Data frame handling I0308 15:18:28.614578 7 log.go:172] (0xc001b548c0) (1) Data frame sent I0308 15:18:28.614599 7 log.go:172] (0xc0020208f0) (0xc001b548c0) Stream removed, broadcasting: 1 I0308 15:18:28.614710 7 log.go:172] (0xc0020208f0) (0xc001b548c0) Stream removed, broadcasting: 1 I0308 15:18:28.614734 7 log.go:172] (0xc0020208f0) (0xc0017ba320) Stream removed, broadcasting: 3 I0308 15:18:28.614754 7 log.go:172] (0xc0020208f0) (0xc001f4c000) Stream removed, broadcasting: 5 Mar 8 15:18:28.614: INFO: Waiting for responses: map[] I0308 15:18:28.614935 7 log.go:172] (0xc0020208f0) Go away received Mar 8 15:18:28.618: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.77:8080/dial?request=hostname&protocol=http&host=10.244.2.23&port=8080&tries=1'] Namespace:pod-network-test-1792 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 15:18:28.618: INFO: >>> kubeConfig: /root/.kube/config I0308 15:18:28.647003 7 log.go:172] (0xc002844370) (0xc001f4c3c0) Create stream I0308 15:18:28.647033 7 log.go:172] (0xc002844370) (0xc001f4c3c0) Stream added, broadcasting: 1 I0308 15:18:28.650562 7 log.go:172] (0xc002844370) Reply frame received for 1 I0308 15:18:28.650618 7 log.go:172] (0xc002844370) (0xc0017ba3c0) Create stream I0308 15:18:28.650638 7 log.go:172] (0xc002844370) (0xc0017ba3c0) Stream added, broadcasting: 3 I0308 15:18:28.651922 7 log.go:172] (0xc002844370) Reply frame received for 3 I0308 15:18:28.651963 7 log.go:172] (0xc002844370) (0xc001b54be0) Create stream I0308 15:18:28.651974 7 log.go:172] (0xc002844370) (0xc001b54be0) Stream added, broadcasting: 5 I0308 15:18:28.652799 7 log.go:172] (0xc002844370) Reply frame received for 5 I0308 15:18:28.719866 7 log.go:172] (0xc002844370) Data frame received for 3 I0308 15:18:28.719894 7 log.go:172] (0xc0017ba3c0) (3) Data frame handling I0308 15:18:28.719920 7 log.go:172] (0xc0017ba3c0) (3) Data frame sent I0308 15:18:28.720163 7 log.go:172] (0xc002844370) Data frame received for 3 I0308 15:18:28.720181 7 log.go:172] (0xc0017ba3c0) (3) Data frame handling I0308 15:18:28.720375 7 log.go:172] (0xc002844370) Data frame received for 5 I0308 15:18:28.720391 7 log.go:172] (0xc001b54be0) (5) Data frame handling I0308 15:18:28.722079 7 log.go:172] (0xc002844370) Data frame received for 1 I0308 15:18:28.722095 7 log.go:172] (0xc001f4c3c0) (1) Data frame handling I0308 15:18:28.722105 7 log.go:172] (0xc001f4c3c0) (1) Data frame sent I0308 15:18:28.722159 7 log.go:172] (0xc002844370) (0xc001f4c3c0) Stream removed, broadcasting: 1 I0308 15:18:28.722180 7 log.go:172] (0xc002844370) Go away received I0308 15:18:28.722323 7 log.go:172] (0xc002844370) (0xc001f4c3c0) Stream removed, broadcasting: 1 I0308 15:18:28.722348 7 log.go:172] (0xc002844370) (0xc0017ba3c0) Stream removed, broadcasting: 3 I0308 15:18:28.722361 7 log.go:172] (0xc002844370) (0xc001b54be0) Stream removed, broadcasting: 5 Mar 8 15:18:28.722: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 15:18:28.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1792" for this suite.
• [SLOW TEST:16.415 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for intra-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":280,"completed":45,"skipped":654,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 15:18:28.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 8 15:18:28.835: INFO: Waiting up to 5m0s for pod "pod-c3bd6853-c831-4bb6-b473-e4741d827c9d" in namespace "emptydir-2910" to be "success or failure"
Mar 8 15:18:28.855: INFO: Pod "pod-c3bd6853-c831-4bb6-b473-e4741d827c9d": Phase="Pending", Reason="", readiness=false. Elapsed: 20.048876ms
Mar 8 15:18:30.859: INFO: Pod "pod-c3bd6853-c831-4bb6-b473-e4741d827c9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023789806s
Mar 8 15:18:32.863: INFO: Pod "pod-c3bd6853-c831-4bb6-b473-e4741d827c9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027875384s
STEP: Saw pod success
Mar 8 15:18:32.863: INFO: Pod "pod-c3bd6853-c831-4bb6-b473-e4741d827c9d" satisfied condition "success or failure"
Mar 8 15:18:32.866: INFO: Trying to get logs from node latest-worker2 pod pod-c3bd6853-c831-4bb6-b473-e4741d827c9d container test-container:
STEP: delete the pod
Mar 8 15:18:32.908: INFO: Waiting for pod pod-c3bd6853-c831-4bb6-b473-e4741d827c9d to disappear
Mar 8 15:18:32.912: INFO: Pod pod-c3bd6853-c831-4bb6-b473-e4741d827c9d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 15:18:32.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2910" for this suite.
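The (root,0666,default) case above mounts an emptyDir on the node's default medium and verifies that a file created there with mode 0666 reports that mode. A rough hand-rolled equivalent; the names and path are illustrative, and the conformance test uses its own test image rather than busybox:

# Sketch only: emptyDir on the default medium, file mode checked by hand.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: vol
      mountPath: /test-volume
  volumes:
  - name: vol
    emptyDir: {}               # default medium (node disk); medium: Memory would give tmpfs
EOF
kubectl logs emptydir-mode-demo   # expect a -rw-rw-rw- entry for /test-volume/f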
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":46,"skipped":662,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 15:18:32.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Mar 8 15:18:33.001: INFO: >>> kubeConfig: /root/.kube/config
Mar 8 15:18:35.863: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 15:18:47.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1368" for this suite.
• [SLOW TEST:14.091 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":280,"completed":47,"skipped":682,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap
binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 15:18:47.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-upd-7084fad1-044d-42cd-b4fe-b721e7faf766
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 15:18:51.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2921" for this suite.
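The binary-data case above relies on the ConfigMap binaryData field, which carries base64-encoded bytes alongside the plain-text data field. A quick way to observe the same behavior by hand; the file and ConfigMap names are illustrative, and kubectl is expected to place non-UTF-8 file content under binaryData automatically:

head -c 16 /dev/urandom > blob.bin                  # arbitrary non-UTF-8 content
kubectl create configmap binary-demo --from-file=blob.bin   # "binary-demo" is an illustrative name
kubectl get configmap binary-demo -o yaml           # blob.bin appears under binaryData, base64-encoded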
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":48,"skipped":691,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation
should return a 406 for a backend which does not implement metadata [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 15:18:51.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 15:18:51.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-746" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":280,"completed":49,"skipped":704,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs
should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 15:18:51.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1466
STEP: creating a pod
Mar 8 15:18:51.351: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-1928 -- logs-generator --log-lines-total 100 --run-duration 20s'
Mar 8 15:18:51.485: INFO: stderr: ""
Mar 8 15:18:51.485: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Waiting for log generator to start.
Mar 8 15:18:51.485: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Mar 8 15:18:51.485: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-1928" to be "running and ready, or succeeded"
Mar 8 15:18:51.492: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.228759ms
Mar 8 15:18:53.495: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.010124164s
Mar 8 15:18:53.496: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Mar 8 15:18:53.496: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Mar 8 15:18:53.496: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1928'
Mar 8 15:18:53.623: INFO: stderr: ""
Mar 8 15:18:53.623: INFO: stdout: "I0308 15:18:52.579456 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/vk9 484\nI0308 15:18:52.779565 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/vdbw 355\nI0308 15:18:52.979707 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/jp8f 240\nI0308 15:18:53.179624 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/cl5s 444\nI0308 15:18:53.379614 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/kwm 555\nI0308 15:18:53.579585 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/pmv 562\n"
STEP: limiting log lines
Mar 8 15:18:53.623: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1928 --tail=1'
Mar 8 15:18:53.735: INFO: stderr: ""
Mar 8 15:18:53.735: INFO: stdout: "I0308 15:18:53.579585 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/pmv 562\n"
Mar 8 15:18:53.735: INFO: got output "I0308 15:18:53.579585 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/pmv 562\n"
STEP: limiting log bytes
Mar 8 15:18:53.735: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1928 --limit-bytes=1'
Mar 8 15:18:53.822: INFO: stderr: ""
Mar 8 15:18:53.822: INFO: stdout: "I"
Mar 8 15:18:53.822: INFO: got output "I"
STEP: exposing timestamps
Mar 8 15:18:53.822: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1928 --tail=1 --timestamps'
Mar 8 15:18:53.910: INFO: stderr: ""
Mar 8 15:18:53.910: INFO: stdout: "2020-03-08T15:18:53.779701597Z I0308 15:18:53.779568 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/pr6 407\n"
Mar 8 15:18:53.910: INFO: got output "2020-03-08T15:18:53.779701597Z I0308 15:18:53.779568 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/pr6 407\n"
STEP: restricting to a time range
Mar 8 15:18:56.410: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1928 --since=1s'
Mar 8 15:18:56.501: INFO: stderr: ""
Mar 8 15:18:56.501: INFO: stdout: "I0308 15:18:55.579610 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/xn2 398\nI0308 15:18:55.779616 1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/lqq 224\nI0308 15:18:55.979704 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/7jr 234\nI0308 15:18:56.179614 1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/p4dc 577\nI0308 15:18:56.379577 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/m28c 456\n"
Mar 8 15:18:56.502: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1928 --since=24h'
Mar 8 15:18:56.640: INFO: stderr: ""
Mar 8 15:18:56.640: INFO: stdout: "I0308 15:18:52.579456 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/vk9 484\nI0308 15:18:52.779565 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/vdbw 355\nI0308 15:18:52.979707 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/jp8f 240\nI0308 15:18:53.179624 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/cl5s 444\nI0308 15:18:53.379614 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/kwm 555\nI0308 15:18:53.579585 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/pmv 562\nI0308 15:18:53.779568 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/pr6 407\nI0308 15:18:53.979562 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/rnfw 201\nI0308 15:18:54.179598 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/f9k 443\nI0308 15:18:54.379583 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/wkcv 439\nI0308 15:18:54.579607 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/sdn 417\nI0308 15:18:54.779610 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/7sx 265\nI0308 15:18:54.979595 1 logs_generator.go:76] 12 POST /api/v1/namespaces/ns/pods/2pg 558\nI0308 15:18:55.179646 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/jbhz 487\nI0308 15:18:55.379605 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/lqc 389\nI0308 15:18:55.579610 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/xn2 398\nI0308 15:18:55.779616 1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/lqq 224\nI0308 15:18:55.979704 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/7jr 234\nI0308 15:18:56.179614 1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/p4dc 577\nI0308 15:18:56.379577 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/m28c 456\nI0308 15:18:56.579555 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/86z 529\n"
[AfterEach] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1472
Mar 8 15:18:56.641: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-1928'
Mar 8 15:18:58.848: INFO: stderr: ""
Mar 8 15:18:58.848: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 15:18:58.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1928" for this suite.
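The four filters exercised above map one-to-one onto kubectl logs flags. The same calls, stripped of the test harness, with the pod and namespace as in this run:

kubectl logs logs-generator logs-generator --namespace=kubectl-1928                       # full log
kubectl logs logs-generator logs-generator --namespace=kubectl-1928 --tail=1              # last line only
kubectl logs logs-generator logs-generator --namespace=kubectl-1928 --limit-bytes=1       # first byte of the stream
kubectl logs logs-generator logs-generator --namespace=kubectl-1928 --tail=1 --timestamps # prefix each line with an RFC3339 timestamp
kubectl logs logs-generator logs-generator --namespace=kubectl-1928 --since=1s            # only entries from the last second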
• [SLOW TEST:7.553 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1462 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":280,"completed":50,"skipped":742,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:18:58.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:75 Mar 8 15:18:58.974: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the sample API server. Mar 8 15:18:59.546: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 8 15:19:01.725: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277539, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277539, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277539, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277539, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 15:19:03.728: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277539, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277539, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277539, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63719277539, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 15:19:05.729: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277539, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277539, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277539, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277539, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 15:19:07.728: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277539, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277539, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277539, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277539, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 15:19:09.728: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277539, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277539, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277539, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277539, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 15:19:11.728: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277539, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63719277539, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277539, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277539, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 15:19:14.368: INFO: Waited 634.373947ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:66 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:19:14.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-7488" for this suite. • [SLOW TEST:16.054 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":280,"completed":51,"skipped":748,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:19:14.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name projected-secret-test-map-f5c1bc82-81a1-4b21-9f63-dfb1b6aee50c STEP: Creating a pod to test consume secrets Mar 8 15:19:15.026: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c2cfe975-31be-4e6f-894a-8afd08b49245" in namespace "projected-9152" to be "success or failure" Mar 8 15:19:15.029: INFO: Pod "pod-projected-secrets-c2cfe975-31be-4e6f-894a-8afd08b49245": Phase="Pending", Reason="", readiness=false. Elapsed: 3.142214ms Mar 8 15:19:17.034: INFO: Pod "pod-projected-secrets-c2cfe975-31be-4e6f-894a-8afd08b49245": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007591633s
STEP: Saw pod success
Mar 8 15:19:17.034: INFO: Pod "pod-projected-secrets-c2cfe975-31be-4e6f-894a-8afd08b49245" satisfied condition "success or failure"
Mar 8 15:19:17.037: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-c2cfe975-31be-4e6f-894a-8afd08b49245 container projected-secret-volume-test:
STEP: delete the pod
Mar 8 15:19:17.095: INFO: Waiting for pod pod-projected-secrets-c2cfe975-31be-4e6f-894a-8afd08b49245 to disappear
Mar 8 15:19:17.107: INFO: Pod pod-projected-secrets-c2cfe975-31be-4e6f-894a-8afd08b49245 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 15:19:17.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9152" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":52,"skipped":782,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 15:19:17.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Mar 8 15:19:17.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 15:19:19.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3698" for this suite.
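The websocket case above drives the pod's exec subresource directly over a websocket connection; kubectl exec negotiates the same streaming endpoint, so a rough everyday equivalent is the following (the pod name is a placeholder; the test creates its own pod in namespace pods-3698):

# Placeholder pod name; any running pod with a shell-capable image works.
kubectl exec some-pod -- echo remote-exec-ok
# Both paths hit the exec subresource on the API server, roughly:
#   /api/v1/namespaces/<ns>/pods/<pod>/exec?command=echo&command=remote-exec-ok&stdout=true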
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":280,"completed":53,"skipped":801,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
works for multiple CRDs of same group but different versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 15:19:19.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Mar 8 15:19:19.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Mar 8 15:19:29.865: INFO: >>> kubeConfig: /root/.kube/config
Mar 8 15:19:32.669: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 15:19:43.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9559" for this suite.
• [SLOW TEST:24.472 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of same group but different versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":280,"completed":54,"skipped":805,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 15:19:43.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0308 15:19:44.522449 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 8 15:19:44.522: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 15:19:44.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3732" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":280,"completed":55,"skipped":826,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 15:19:44.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 8 15:19:45.672: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 8 15:19:48.117: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277585, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277585, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277585, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277585, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 8 15:19:51.144: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Mar 8 15:19:51.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6191-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 15:19:52.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3830" for this suite.
STEP: Destroying namespace "webhook-3830-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• [SLOW TEST:7.877 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":280,"completed":56,"skipped":837,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 15:19:52.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 8 15:19:52.537: INFO: Waiting up to 5m0s for pod "pod-df937d7d-9f09-4113-aa54-640a24115798" in namespace "emptydir-3012" to be "success or failure"
Mar 8 15:19:52.539: INFO: Pod "pod-df937d7d-9f09-4113-aa54-640a24115798": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108383ms
Mar 8 15:19:54.543: INFO: Pod "pod-df937d7d-9f09-4113-aa54-640a24115798": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005965795s
Mar 8 15:19:56.546: INFO: Pod "pod-df937d7d-9f09-4113-aa54-640a24115798": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009515749s
STEP: Saw pod success
Mar 8 15:19:56.546: INFO: Pod "pod-df937d7d-9f09-4113-aa54-640a24115798" satisfied condition "success or failure"
Mar 8 15:19:56.549: INFO: Trying to get logs from node latest-worker pod pod-df937d7d-9f09-4113-aa54-640a24115798 container test-container:
STEP: delete the pod
Mar 8 15:19:56.577: INFO: Waiting for pod pod-df937d7d-9f09-4113-aa54-640a24115798 to disappear
Mar 8 15:19:56.582: INFO: Pod pod-df937d7d-9f09-4113-aa54-640a24115798 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 15:19:56.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3012" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":57,"skipped":841,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 15:19:56.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Mar 8 15:19:59.678: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 15:19:59.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6824" for this suite.
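Adoption and release in the steps above are driven entirely by the pod's labels versus the ReplicaSet's selector. The sequence can be replayed by hand; the name=pod-adoption-release label follows the log, everything else here is illustrative:

# A bare pod carrying the label a ReplicaSet selector will match.
kubectl run pod-adoption-release --generator=run-pod/v1 --image=k8s.gcr.io/pause:3.1 --labels=name=pod-adoption-release
# ...create a ReplicaSet selecting name=pod-adoption-release; it adopts the pod (an ownerReference is added)...
# Changing the label releases the pod, and the ReplicaSet spawns a replacement:
kubectl label pod pod-adoption-release name=released --overwrite
kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences}'   # expect: empty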
•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":280,"completed":58,"skipped":879,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 15:19:59.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Mar 8 15:20:00.379: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Mar 8 15:20:00.393: INFO: Number of nodes with available pods: 0
Mar 8 15:20:00.393: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Mar 8 15:20:00.477: INFO: Number of nodes with available pods: 0
Mar 8 15:20:00.477: INFO: Node latest-worker2 is running more than one daemon pod
Mar 8 15:20:01.480: INFO: Number of nodes with available pods: 0
Mar 8 15:20:01.480: INFO: Node latest-worker2 is running more than one daemon pod
Mar 8 15:20:02.480: INFO: Number of nodes with available pods: 1
Mar 8 15:20:02.480: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Mar 8 15:20:02.526: INFO: Number of nodes with available pods: 1
Mar 8 15:20:02.526: INFO: Number of running nodes: 0, number of available pods: 1
Mar 8 15:20:03.530: INFO: Number of nodes with available pods: 0
Mar 8 15:20:03.530: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Mar 8 15:20:03.543: INFO: Number of nodes with available pods: 0
Mar 8 15:20:03.543: INFO: Node latest-worker2 is running more than one daemon pod
Mar 8 15:20:04.545: INFO: Number of nodes with available pods: 0
Mar 8 15:20:04.546: INFO: Node latest-worker2 is running more than one daemon pod
Mar 8 15:20:05.548: INFO: Number of nodes with available pods: 0
Mar 8 15:20:05.549: INFO: Node latest-worker2 is running more than one daemon pod
Mar 8 15:20:06.546: INFO: Number of nodes with available pods: 0
Mar 8 15:20:06.546: INFO: Node latest-worker2 is running more than one daemon pod
Mar 8 15:20:07.546: INFO: Number of nodes with available pods: 0
Mar 8 15:20:07.546: INFO: Node latest-worker2 is running more than one daemon pod
Mar 8 15:20:08.558: INFO: Number of nodes with available pods: 0
Mar 8 15:20:08.558: INFO: Node latest-worker2 is running more than one daemon pod
Mar 8 15:20:09.546: INFO: Number of nodes with available pods: 0
Mar 8 15:20:09.547: INFO: Node latest-worker2 is running more than one daemon pod
Mar 8 15:20:10.555: INFO: Number of nodes with available pods: 0
Mar 8 15:20:10.555: INFO: Node latest-worker2 is running more than one daemon pod
Mar 8 15:20:11.546: INFO: Number of nodes with available pods: 0
Mar 8 15:20:11.546: INFO: Node latest-worker2 is running more than one daemon pod
Mar 8 15:20:12.546: INFO: Number of nodes with available pods: 0
Mar 8 15:20:12.546: INFO: Node latest-worker2 is running more than one daemon pod
Mar 8 15:20:13.547: INFO: Number of nodes with available pods: 0
Mar 8 15:20:13.547: INFO: Node latest-worker2 is running more than one daemon pod
Mar 8 15:20:14.546: INFO: Number of nodes with available pods: 1
Mar 8 15:20:14.546: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6961, will wait for the garbage collector to delete the pods
Mar 8 15:20:14.640: INFO: Deleting DaemonSet.extensions daemon-set took: 36.103996ms
Mar 8 15:20:14.740: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.22632ms
Mar 8 15:20:22.143: INFO: Number of nodes with available pods: 0
Mar 8 15:20:22.143: INFO: Number of running nodes: 0, number of available pods: 0
Mar 8 15:20:22.145: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6961/daemonsets","resourceVersion":"11891"},"items":null}
Mar 8 15:20:22.147: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6961/pods","resourceVersion":"11891"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 15:20:22.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6961" for this suite.
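The blue/green steps above are plain node-label manipulation against the DaemonSet's nodeSelector. Sketched by hand; the "color" label key is illustrative (the test generates its own key), and the node name follows the log:

kubectl label node latest-worker2 color=blue                # selector matches; the daemon pod launches there
kubectl label node latest-worker2 color=green --overwrite   # selector no longer matches; the pod is removed
# Point the DaemonSet at the new label and switch it to RollingUpdate, as the test does:
kubectl patch daemonset daemon-set --type=merge -p '{"spec":{"template":{"spec":{"nodeSelector":{"color":"green"}}},"updateStrategy":{"type":"RollingUpdate"}}}'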
• [SLOW TEST:22.424 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":280,"completed":59,"skipped":886,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:20:22.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2879.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2879.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2879.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2879.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2879.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2879.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2879.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2879.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2879.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2879.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2879.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 59.160.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.160.59_udp@PTR;check="$$(dig +tcp +noall +answer +search 59.160.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.160.59_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2879.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2879.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2879.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2879.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2879.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2879.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2879.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2879.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2879.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2879.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2879.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 59.160.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.160.59_udp@PTR;check="$$(dig +tcp +noall +answer +search 59.160.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.160.59_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 15:20:36.492: INFO: Unable to read wheezy_udp@dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:36.495: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:36.499: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:36.502: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:36.527: INFO: Unable to read jessie_udp@dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:36.530: INFO: Unable to read jessie_tcp@dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:36.533: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:36.542: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:36.560: INFO: Lookups using dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850 failed for: [wheezy_udp@dns-test-service.dns-2879.svc.cluster.local wheezy_tcp@dns-test-service.dns-2879.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local jessie_udp@dns-test-service.dns-2879.svc.cluster.local jessie_tcp@dns-test-service.dns-2879.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local] Mar 8 15:20:41.565: INFO: Unable to read wheezy_udp@dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:41.568: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) 
Mar 8 15:20:41.571: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:41.574: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:41.592: INFO: Unable to read jessie_udp@dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:41.595: INFO: Unable to read jessie_tcp@dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:41.597: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:41.600: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:41.613: INFO: Lookups using dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850 failed for: [wheezy_udp@dns-test-service.dns-2879.svc.cluster.local wheezy_tcp@dns-test-service.dns-2879.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local jessie_udp@dns-test-service.dns-2879.svc.cluster.local jessie_tcp@dns-test-service.dns-2879.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local] Mar 8 15:20:46.567: INFO: Unable to read wheezy_udp@dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:46.570: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:46.573: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:46.576: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:46.593: INFO: Unable to read jessie_udp@dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods 
dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:46.596: INFO: Unable to read jessie_tcp@dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:46.599: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:46.601: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:46.616: INFO: Lookups using dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850 failed for: [wheezy_udp@dns-test-service.dns-2879.svc.cluster.local wheezy_tcp@dns-test-service.dns-2879.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local jessie_udp@dns-test-service.dns-2879.svc.cluster.local jessie_tcp@dns-test-service.dns-2879.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local] Mar 8 15:20:51.565: INFO: Unable to read wheezy_udp@dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:51.568: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:51.571: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:51.574: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:51.647: INFO: Unable to read jessie_udp@dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:51.650: INFO: Unable to read jessie_tcp@dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:51.652: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:51.694: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could 
not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:51.718: INFO: Lookups using dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850 failed for: [wheezy_udp@dns-test-service.dns-2879.svc.cluster.local wheezy_tcp@dns-test-service.dns-2879.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local jessie_udp@dns-test-service.dns-2879.svc.cluster.local jessie_tcp@dns-test-service.dns-2879.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local] Mar 8 15:20:56.565: INFO: Unable to read wheezy_udp@dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:56.568: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:56.571: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:56.573: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:56.593: INFO: Unable to read jessie_udp@dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:56.595: INFO: Unable to read jessie_tcp@dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:56.612: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:56.615: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local from pod dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850: the server could not find the requested resource (get pods dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850) Mar 8 15:20:56.631: INFO: Lookups using dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850 failed for: [wheezy_udp@dns-test-service.dns-2879.svc.cluster.local wheezy_tcp@dns-test-service.dns-2879.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local jessie_udp@dns-test-service.dns-2879.svc.cluster.local jessie_tcp@dns-test-service.dns-2879.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2879.svc.cluster.local] Mar 8 15:21:01.639: INFO: DNS probes using dns-2879/dns-test-ddee2592-5c0f-4dfe-926b-3ff00c241850 
succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:21:01.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2879" for this suite. • [SLOW TEST:39.742 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":280,"completed":60,"skipped":892,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:21:01.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 8 15:21:08.064: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 8 15:21:08.116: INFO: Pod pod-with-prestop-exec-hook still exists Mar 8 15:21:10.117: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 8 15:21:10.120: INFO: Pod pod-with-prestop-exec-hook still exists Mar 8 15:21:12.117: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 8 15:21:12.119: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:21:12.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7701" for this suite. 
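The lifecycle-hook test above registers a PreStop exec handler on the pod and then deletes it, confirming the hook fires. A minimal Go sketch of that pod shape, built against the v1.17-era k8s.io/api types this run uses (newer releases rename Handler to LifecycleHandler); the image and hook command below are placeholders, not the test's exact values:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// prestopPod sketches a pod whose PreStop hook runs inside the container
// when deletion begins; the hook must finish within the grace period.
func prestopPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-prestop-exec-hook",
				Image: "k8s.gcr.io/pause:3.1", // placeholder image
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							// placeholder command; the real test notifies a handler pod
							Command: []string{"sh", "-c", "echo prestop"},
						},
					},
				},
			}},
		},
	}
}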
• [SLOW TEST:10.187 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":280,"completed":61,"skipped":924,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:21:12.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 8 15:21:12.256: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2a8522c0-6ec2-41c5-9536-bceff4e66094" in namespace "downward-api-5262" to be "success or failure" Mar 8 15:21:12.263: INFO: Pod "downwardapi-volume-2a8522c0-6ec2-41c5-9536-bceff4e66094": Phase="Pending", Reason="", readiness=false. Elapsed: 6.624802ms Mar 8 15:21:14.266: INFO: Pod "downwardapi-volume-2a8522c0-6ec2-41c5-9536-bceff4e66094": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009511536s STEP: Saw pod success Mar 8 15:21:14.266: INFO: Pod "downwardapi-volume-2a8522c0-6ec2-41c5-9536-bceff4e66094" satisfied condition "success or failure" Mar 8 15:21:14.268: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-2a8522c0-6ec2-41c5-9536-bceff4e66094 container client-container: STEP: delete the pod Mar 8 15:21:14.291: INFO: Waiting for pod downwardapi-volume-2a8522c0-6ec2-41c5-9536-bceff4e66094 to disappear Mar 8 15:21:14.346: INFO: Pod downwardapi-volume-2a8522c0-6ec2-41c5-9536-bceff4e66094 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:21:14.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5262" for this suite. 
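The Downward API volume test above projects the container's own memory request into a file and reads it back from the container's logs. A sketch of the volume and container wiring (package name, image, mount path and the 32Mi request are illustrative assumptions, not the test's exact values):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// memoryRequestVolume wires a downwardAPI volume file to the container's
// requests.memory, so the kubelet writes the value into /etc/podinfo.
func memoryRequestVolume() (corev1.Volume, corev1.Container) {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "memory_request",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "requests.memory",
					},
				}},
			},
		},
	}
	ctr := corev1.Container{
		Name:    "client-container",
		Image:   "docker.io/library/busybox:1.29", // placeholder image
		Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceMemory: resource.MustParse("32Mi"), // hypothetical request
			},
		},
		VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
	}
	return vol, ctr
}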
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":280,"completed":62,"skipped":955,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:21:14.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Mar 8 15:21:14.435: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:21:17.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9223" for this suite. •{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":280,"completed":63,"skipped":965,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:21:17.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating Agnhost RC Mar 8 15:21:17.997: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7085' Mar 8 15:21:20.497: INFO: stderr: "" Mar 8 15:21:20.498: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 8 15:21:21.501: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 15:21:21.501: INFO: Found 0 / 1 Mar 8 15:21:22.500: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 15:21:22.500: INFO: Found 1 / 1 Mar 8 15:21:22.500: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 8 15:21:22.502: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 15:21:22.502: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Mar 8 15:21:22.502: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config patch pod agnhost-master-rbd4g --namespace=kubectl-7085 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 8 15:21:22.615: INFO: stderr: "" Mar 8 15:21:22.615: INFO: stdout: "pod/agnhost-master-rbd4g patched\n" STEP: checking annotations Mar 8 15:21:22.634: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 15:21:22.634: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:21:22.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7085" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":280,"completed":64,"skipped":998,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:21:22.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 15:21:23.152: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 15:21:25.161: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277683, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277683, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277683, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277683, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 15:21:28.193: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 15:21:28.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook 
via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:21:29.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-918" for this suite. STEP: Destroying namespace "webhook-918-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:6.906 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":280,"completed":65,"skipped":1009,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:21:29.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 8 15:21:29.675: INFO: Waiting up to 5m0s for pod "pod-3f30f97d-f654-4208-a879-52a10370dd01" in namespace "emptydir-7939" to be "success or failure" Mar 8 15:21:29.700: INFO: Pod "pod-3f30f97d-f654-4208-a879-52a10370dd01": Phase="Pending", Reason="", readiness=false. Elapsed: 24.94338ms Mar 8 15:21:31.703: INFO: Pod "pod-3f30f97d-f654-4208-a879-52a10370dd01": Phase="Running", Reason="", readiness=true. Elapsed: 2.028459108s Mar 8 15:21:33.708: INFO: Pod "pod-3f30f97d-f654-4208-a879-52a10370dd01": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033095949s STEP: Saw pod success Mar 8 15:21:33.708: INFO: Pod "pod-3f30f97d-f654-4208-a879-52a10370dd01" satisfied condition "success or failure" Mar 8 15:21:33.711: INFO: Trying to get logs from node latest-worker pod pod-3f30f97d-f654-4208-a879-52a10370dd01 container test-container: STEP: delete the pod Mar 8 15:21:33.726: INFO: Waiting for pod pod-3f30f97d-f654-4208-a879-52a10370dd01 to disappear Mar 8 15:21:33.730: INFO: Pod pod-3f30f97d-f654-4208-a879-52a10370dd01 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:21:33.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7939" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":66,"skipped":1014,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:21:33.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:21:44.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6266" for this suite. • [SLOW TEST:11.172 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":280,"completed":67,"skipped":1028,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:21:44.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 8 15:21:48.346: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:21:48.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4255" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":68,"skipped":1077,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:21:48.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 8 15:21:50.986: INFO: Successfully updated pod "pod-update-35e0bf3f-0fbe-4634-9099-a41bfcf82487" STEP: verifying the updated pod is in kubernetes Mar 8 15:21:51.094: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:21:51.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5895" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":280,"completed":69,"skipped":1122,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:21:51.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: validating cluster-info Mar 8 15:21:51.244: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config cluster-info' Mar 8 15:21:51.320: INFO: stderr: "" Mar 8 15:21:51.320: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32776\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32776/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:21:51.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3303" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":280,"completed":70,"skipped":1150,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:21:51.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod liveness-2c140612-8ee0-46fd-bf0b-576ee17e8fec in namespace container-probe-1085 Mar 8 15:21:53.544: INFO: Started pod liveness-2c140612-8ee0-46fd-bf0b-576ee17e8fec in namespace container-probe-1085 STEP: checking the pod's current state and verifying that restartCount is present Mar 8 15:21:53.546: INFO: Initial restart count of pod liveness-2c140612-8ee0-46fd-bf0b-576ee17e8fec is 0 Mar 8 15:22:17.610: INFO: Restart count of pod container-probe-1085/liveness-2c140612-8ee0-46fd-bf0b-576ee17e8fec is now 1 (24.064117835s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:22:17.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1085" for this suite. 
• [SLOW TEST:26.297 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":280,"completed":71,"skipped":1158,"failed":0} SSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:22:17.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:22:41.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2016" for this suite. 
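The three containers above, terminate-cmd-rpa, terminate-cmd-rpof and terminate-cmd-rpn, appear to exercise the three RestartPolicy values (Always, OnFailure, Never), which govern whether the kubelet restarts a container after it exits. A sketch constructing pods along those lines (the mapping of suffixes to policies is inferred from the names; image and exit command are assumptions):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// terminatingPods builds one exiting pod per restart policy; the policy
// decides the resulting RestartCount, Phase and Ready condition.
func terminatingPods() []*corev1.Pod {
	cases := []struct {
		name   string
		policy corev1.RestartPolicy
	}{
		{"terminate-cmd-rpa", corev1.RestartPolicyAlways},
		{"terminate-cmd-rpof", corev1.RestartPolicyOnFailure},
		{"terminate-cmd-rpn", corev1.RestartPolicyNever},
	}
	pods := make([]*corev1.Pod, 0, len(cases))
	for _, c := range cases {
		pods = append(pods, &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{Name: c.name},
			Spec: corev1.PodSpec{
				RestartPolicy: c.policy,
				Containers: []corev1.Container{{
					Name:    c.name,
					Image:   "docker.io/library/busybox:1.29", // placeholder image
					Command: []string{"sh", "-c", "exit 0"},   // the exit code is what the policies react to
				}},
			},
		})
	}
	return pods
}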
• [SLOW TEST:23.532 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":280,"completed":72,"skipped":1163,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:22:41.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 8 15:22:41.339: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6b056c20-0bfd-4292-901b-5b94b8fc12f7" in namespace "projected-9517" to be "success or failure" Mar 8 15:22:41.385: INFO: Pod "downwardapi-volume-6b056c20-0bfd-4292-901b-5b94b8fc12f7": Phase="Pending", Reason="", readiness=false. Elapsed: 46.481632ms Mar 8 15:22:43.388: INFO: Pod "downwardapi-volume-6b056c20-0bfd-4292-901b-5b94b8fc12f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.049662415s STEP: Saw pod success Mar 8 15:22:43.388: INFO: Pod "downwardapi-volume-6b056c20-0bfd-4292-901b-5b94b8fc12f7" satisfied condition "success or failure" Mar 8 15:22:43.391: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-6b056c20-0bfd-4292-901b-5b94b8fc12f7 container client-container: STEP: delete the pod Mar 8 15:22:43.449: INFO: Waiting for pod downwardapi-volume-6b056c20-0bfd-4292-901b-5b94b8fc12f7 to disappear Mar 8 15:22:43.457: INFO: Pod downwardapi-volume-6b056c20-0bfd-4292-901b-5b94b8fc12f7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:22:43.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9517" for this suite. 
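The projected downwardAPI test above exposes the pod's own name through a projected volume rather than a plain downwardAPI volume. A sketch of that volume source (volume name and file path are illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// podnameProjection places metadata.name into a file inside a projected
// volume, which can combine downwardAPI, secret and configMap sources.
func podnameProjection() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.name",
							},
						}},
					},
				}},
			},
		},
	}
}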
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":280,"completed":73,"skipped":1197,"failed":0} SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:22:43.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 8 15:22:51.774: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 8 15:22:51.821: INFO: Pod pod-with-poststart-exec-hook still exists Mar 8 15:22:53.821: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 8 15:22:53.823: INFO: Pod pod-with-poststart-exec-hook still exists Mar 8 15:22:55.821: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 8 15:22:55.851: INFO: Pod pod-with-poststart-exec-hook still exists Mar 8 15:22:57.821: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 8 15:22:57.856: INFO: Pod pod-with-poststart-exec-hook still exists Mar 8 15:22:59.821: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 8 15:22:59.825: INFO: Pod pod-with-poststart-exec-hook still exists Mar 8 15:23:01.821: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 8 15:23:01.825: INFO: Pod pod-with-poststart-exec-hook still exists Mar 8 15:23:03.821: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 8 15:23:03.825: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:23:03.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7648" for this suite. 
• [SLOW TEST:20.364 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":280,"completed":74,"skipped":1201,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 15:23:03.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Mar 8 15:23:03.904: INFO: Pod name rollover-pod: Found 0 pods out of 1
Mar 8 15:23:08.923: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Mar 8 15:23:08.923: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Mar 8 15:23:10.927: INFO: Creating deployment "test-rollover-deployment"
Mar 8 15:23:10.964: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Mar 8 15:23:12.970: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Mar 8 15:23:12.974: INFO: Ensure that both replica sets have 1 created replica
Mar 8 15:23:12.978: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Mar 8 15:23:12.983: INFO: Updating deployment test-rollover-deployment
Mar 8 15:23:12.983: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Mar 8 15:23:15.308: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Mar 8 15:23:15.319: INFO: Make sure deployment "test-rollover-deployment" is complete
Mar 8 15:23:15.456: INFO: all replica sets need to contain the pod-template-hash label
Mar 8 15:23:15.456: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277791, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277791, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277793, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277790, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
[the "all replica sets need to contain the pod-template-hash label" check and a near-identical status dump (ReadyReplicas now 2) repeated at 15:23:17.464, 15:23:19.465, 15:23:21.463, 15:23:23.463 and 15:23:25.463 until the rollover completed]
Mar 8 15:23:27.462: INFO:
Mar 8 15:23:27.462: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Mar 8 15:23:27.469: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-4758 /apis/apps/v1/namespaces/deployment-4758/deployments/test-rollover-deployment cf2da14d-06d0-4c93-ad9e-6cfee4a7f278 13064 2 2020-03-08 15:23:10 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002956d68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[]
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-08 15:23:11 +0000 UTC,LastTransitionTime:2020-03-08 15:23:11 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-03-08 15:23:25 +0000 UTC,LastTransitionTime:2020-03-08 15:23:10 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 8 15:23:27.471: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-4758 /apis/apps/v1/namespaces/deployment-4758/replicasets/test-rollover-deployment-574d6dfbff 3c4e967c-40a8-427f-8294-04aef401cae1 13052 2 2020-03-08 15:23:12 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment cf2da14d-06d0-4c93-ad9e-6cfee4a7f278 0xc002957207 0xc002957208}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002957288 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 8 15:23:27.471: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 8 15:23:27.471: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-4758 /apis/apps/v1/namespaces/deployment-4758/replicasets/test-rollover-controller 74a245a5-f471-4795-a4a4-f1d7cac13649 13062 2 2020-03-08 15:23:03 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment cf2da14d-06d0-4c93-ad9e-6cfee4a7f278 0xc002957137 0xc002957138}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] 
[] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002957198 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 8 15:23:27.471: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-4758 /apis/apps/v1/namespaces/deployment-4758/replicasets/test-rollover-deployment-f6c94f66c dbd28617-1f3e-4ee6-a591-0592ee3c92cb 13007 2 2020-03-08 15:23:10 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment cf2da14d-06d0-4c93-ad9e-6cfee4a7f278 0xc002957320 0xc002957321}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0029574a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 8 15:23:27.474: INFO: Pod "test-rollover-deployment-574d6dfbff-4v785" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-4v785 test-rollover-deployment-574d6dfbff- deployment-4758 /api/v1/namespaces/deployment-4758/pods/test-rollover-deployment-574d6dfbff-4v785 26276a55-da19-49f8-bada-2d8445b542d4 13023 0 2020-03-08 15:23:13 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 3c4e967c-40a8-427f-8294-04aef401cae1 0xc002957dd7 0xc002957dd8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl2x8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl2x8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl2x8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 15:23:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 15:23:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 15:23:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 15:23:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.102,StartTime:2020-03-08 15:23:13 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 15:23:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://dfc45f509815e5196ac078f631101107c08710ee519504f8ca6021f6c1bd7c4d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.102,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:23:27.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4758" for this suite. • [SLOW TEST:23.647 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":280,"completed":75,"skipped":1211,"failed":0} [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:23:27.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 15:23:27.559: INFO: Creating deployment "test-recreate-deployment" Mar 8 15:23:27.564: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 Mar 8 15:23:27.630: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 8 15:23:29.638: INFO: Waiting for deployment "test-recreate-deployment" to complete Mar 8 15:23:29.640: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 8 15:23:29.647: INFO: Updating deployment test-recreate-deployment Mar 8 15:23:29.647: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 8 15:23:29.896: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-9542 /apis/apps/v1/namespaces/deployment-9542/deployments/test-recreate-deployment 5b7823e4-3089-4e6a-8ea9-b09961f3a87a 13120 2 2020-03-08 15:23:27 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] 
[]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00267fc78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-08 15:23:29 +0000 UTC,LastTransitionTime:2020-03-08 15:23:29 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-03-08 15:23:29 +0000 UTC,LastTransitionTime:2020-03-08 15:23:27 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Mar 8 15:23:29.908: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-9542 /apis/apps/v1/namespaces/deployment-9542/replicasets/test-recreate-deployment-5f94c574ff ab197ba2-6c77-4250-a54a-878b7a05ccb9 13118 1 2020-03-08 15:23:29 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 5b7823e4-3089-4e6a-8ea9-b09961f3a87a 0xc0008961c7 0xc0008961c8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000896308 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 8 15:23:29.908: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 8 
15:23:29.908: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-9542 /apis/apps/v1/namespaces/deployment-9542/replicasets/test-recreate-deployment-799c574856 ef74b0e3-91ad-4808-a893-ed4e651c2098 13109 2 2020-03-08 15:23:27 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 5b7823e4-3089-4e6a-8ea9-b09961f3a87a 0xc000896457 0xc000896458}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000896598 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 8 15:23:29.913: INFO: Pod "test-recreate-deployment-5f94c574ff-7gbrj" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-7gbrj test-recreate-deployment-5f94c574ff- deployment-9542 /api/v1/namespaces/deployment-9542/pods/test-recreate-deployment-5f94c574ff-7gbrj eed697a0-bc3a-4e08-9a53-cd3e5f063a85 13122 0 2020-03-08 15:23:29 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff ab197ba2-6c77-4250-a54a-878b7a05ccb9 0xc000896b77 0xc000896b78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-twqsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-twqsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-twqsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 15:23:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 15:23:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 15:23:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 15:23:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-08 15:23:29 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:23:29.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9542" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":280,"completed":76,"skipped":1211,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:23:29.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-map-c06c47d4-f877-4494-b529-7c2627adb294 STEP: Creating a pod to test consume configMaps Mar 8 15:23:30.038: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ca42a8c-4bc1-4ed9-84e2-729399b8ceb4" in namespace "configmap-2747" to be "success or failure" Mar 8 15:23:30.046: INFO: Pod "pod-configmaps-7ca42a8c-4bc1-4ed9-84e2-729399b8ceb4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.890238ms Mar 8 15:23:32.050: INFO: Pod "pod-configmaps-7ca42a8c-4bc1-4ed9-84e2-729399b8ceb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011837732s Mar 8 15:23:34.053: INFO: Pod "pod-configmaps-7ca42a8c-4bc1-4ed9-84e2-729399b8ceb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014904384s STEP: Saw pod success Mar 8 15:23:34.053: INFO: Pod "pod-configmaps-7ca42a8c-4bc1-4ed9-84e2-729399b8ceb4" satisfied condition "success or failure" Mar 8 15:23:34.055: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-7ca42a8c-4bc1-4ed9-84e2-729399b8ceb4 container configmap-volume-test: STEP: delete the pod Mar 8 15:23:34.095: INFO: Waiting for pod pod-configmaps-7ca42a8c-4bc1-4ed9-84e2-729399b8ceb4 to disappear Mar 8 15:23:34.138: INFO: Pod pod-configmaps-7ca42a8c-4bc1-4ed9-84e2-729399b8ceb4 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:23:34.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2747" for this suite. 
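For readers reproducing the ConfigMap volume test above by hand, a minimal manifest sketch follows. The ConfigMap name prefix and the container name configmap-volume-test come from the log; the data key, mount path, item path, mode, and image are assumptions chosen only to exercise the same "mappings and Item mode set" behavior.

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map          # prefix from the log; random suffix omitted
data:
  data-1: value-1                          # assumed payload; the log does not show the data
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example             # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test            # container name as reported in the log
    image: docker.io/library/busybox:1.29  # assumed image
    command:
    - sh
    - -c
    - ls -l /etc/configmap-volume/path/to/data-1 && cat /etc/configmap-volume/path/to/data-1
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1
        path: path/to/data-1               # the "mapping": the key is remapped to a nested path
        mode: 0400                         # the per-item mode under test; value assumed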
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":77,"skipped":1214,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:23:34.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics W0308 15:23:40.224749 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 8 15:23:40.224: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:23:40.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9588" for this suite. 
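The deleteOptions behavior exercised by the garbage collector test above is foreground cascading deletion: the owning rc is kept, carrying a deletionTimestamp and the foregroundDeletion finalizer, until all of its dependent pods are gone. A sketch of the DeleteOptions body such a delete request would carry; treating this test's deleteOptions as foreground propagation is an assumption consistent with its description, and the rc name is not shown in the log.

kind: DeleteOptions
apiVersion: v1
propagationPolicy: Foreground   # the owner is removed only after its dependents are deleted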
• [SLOW TEST:6.086 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":280,"completed":78,"skipped":1224,"failed":0} SSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:23:40.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating server pod server in namespace prestop-8502 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-8502 STEP: Deleting pre-stop pod Mar 8 15:23:53.446: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:23:53.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-8502" for this suite. 
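The "Received": {"prestop": 1} payload above shows the server pod counted exactly one pre-stop callback from the tester. A minimal sketch of a pod wired the same way, assuming the server exposes a /prestop endpoint at a known address; the image, port, and wget-based hook are assumptions, only the pod name comes from the log.

apiVersion: v1
kind: Pod
metadata:
  name: tester                             # name from the log
spec:
  containers:
  - name: tester
    image: docker.io/library/busybox:1.29  # assumed image
    command: ["sleep", "600"]
    lifecycle:
      preStop:
        exec:
          # notify the server pod before this container is stopped;
          # SERVER_IP and the port are placeholders for the server pod's address
          command: ["sh", "-c", "wget -qO- http://SERVER_IP:8080/prestop"]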
• [SLOW TEST:13.237 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":280,"completed":79,"skipped":1232,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:23:53.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9628.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9628.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 15:24:09.667: INFO: DNS probes using dns-9628/dns-test-cb82c8ef-4ef5-4327-8ff2-c40f31321a4e succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:24:09.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9628" for this suite. 
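The wheezy/jessie command loops above poll both UDP and TCP lookups for kubernetes.default and for the pod's own A record. A stripped-down probe pod performing the same check once; the pod name and image are assumptions, and busybox's nslookup is used in place of the suite's dig-based probes.

apiVersion: v1
kind: Pod
metadata:
  name: dns-probe                          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: querier
    image: docker.io/library/busybox:1.29  # assumed image
    command:
    - sh
    - -c
    - nslookup kubernetes.default.svc.cluster.local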
• [SLOW TEST:16.290 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":280,"completed":80,"skipped":1242,"failed":0} S ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:24:09.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod busybox-0c5a27b2-c1f7-4315-945e-650e6de4019e in namespace container-probe-1558 Mar 8 15:24:11.858: INFO: Started pod busybox-0c5a27b2-c1f7-4315-945e-650e6de4019e in namespace container-probe-1558 STEP: checking the pod's current state and verifying that restartCount is present Mar 8 15:24:11.861: INFO: Initial restart count of pod busybox-0c5a27b2-c1f7-4315-945e-650e6de4019e is 0 Mar 8 15:25:06.192: INFO: Restart count of pod container-probe-1558/busybox-0c5a27b2-c1f7-4315-945e-650e6de4019e is now 1 (54.331188923s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:25:06.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1558" for this suite. 
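The restart observed at ~54s above is the expected effect of the exec probe failing once its target file disappears. A minimal pod reproducing the shape of this test; the probe command is taken from the test name, while the pod name, image tag, timings, and the touch/rm sequence are assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness                   # hypothetical name
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29  # assumed tag
    args:
    - /bin/sh
    - -c
    - touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]    # probe from the test name
      initialDelaySeconds: 15              # assumed timings
      periodSeconds: 5
      failureThreshold: 1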
• [SLOW TEST:56.470 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":280,"completed":81,"skipped":1243,"failed":0} SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:25:06.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 8 15:25:08.388: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:25:08.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-289" for this suite. 
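What the test above verifies is that a non-root container can write its termination message to a custom terminationMessagePath and that the kubelet surfaces it (the "DONE" matched in the log). A sketch under assumptions: the path, uid, and image are invented, and only the message text DONE comes from the log.

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: term-msg
    image: docker.io/library/busybox:1.29  # assumed image
    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log   # non-default path; name assumed
    securityContext:
      runAsUser: 1000                      # non-root user; uid assumed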
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":280,"completed":82,"skipped":1245,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:25:08.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 8 15:25:08.482: INFO: Waiting up to 5m0s for pod "pod-3540c7ef-9452-4ba4-93d7-868719a80ecc" in namespace "emptydir-8224" to be "success or failure" Mar 8 15:25:08.508: INFO: Pod "pod-3540c7ef-9452-4ba4-93d7-868719a80ecc": Phase="Pending", Reason="", readiness=false. Elapsed: 26.039339ms Mar 8 15:25:10.511: INFO: Pod "pod-3540c7ef-9452-4ba4-93d7-868719a80ecc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029409436s Mar 8 15:25:12.515: INFO: Pod "pod-3540c7ef-9452-4ba4-93d7-868719a80ecc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03317302s STEP: Saw pod success Mar 8 15:25:12.515: INFO: Pod "pod-3540c7ef-9452-4ba4-93d7-868719a80ecc" satisfied condition "success or failure" Mar 8 15:25:12.518: INFO: Trying to get logs from node latest-worker pod pod-3540c7ef-9452-4ba4-93d7-868719a80ecc container test-container: STEP: delete the pod Mar 8 15:25:12.552: INFO: Waiting for pod pod-3540c7ef-9452-4ba4-93d7-868719a80ecc to disappear Mar 8 15:25:12.556: INFO: Pod pod-3540c7ef-9452-4ba4-93d7-868719a80ecc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:25:12.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8224" for this suite. 
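The (root,0644,default) tuple in the test name above means: write as root, expect the file to carry mode 0644, on the default (node-disk-backed) emptyDir medium. A pod exercising the same check by hand; the container name test-container comes from the log, everything else (image, paths, shell commands) is an assumption.

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-check                # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container                   # container name as reported in the log
    image: docker.io/library/busybox:1.29  # assumed image
    command:
    - sh
    - -c
    - touch /test-volume/test-file && chmod 0644 /test-volume/test-file && ls -l /test-volume/test-file
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                           # default medium (node disk), as opposed to medium: Memory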
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":83,"skipped":1249,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:25:12.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating the pod Mar 8 15:25:15.191: INFO: Successfully updated pod "labelsupdate619f1921-fd81-4616-86e5-47e92d9eee48" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:25:17.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6373" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":280,"completed":84,"skipped":1262,"failed":0} SSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:25:17.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:25:20.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8551" for this suite. 
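Adoption in the ReplicationController test above works because the bare pod already carries a label matching the controller's selector, so the controller takes ownership of it instead of creating a replacement. A sketch of the pair; the pod name and its 'name' label come from the log, while the image (reused from elsewhere in this run) and container names are assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: docker.io/library/httpd:2.4.38-alpine   # assumed; image seen elsewhere in this run
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption            # matches the pre-existing pod, so it is adopted
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/httpd:2.4.38-alpine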
•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":280,"completed":85,"skipped":1266,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:25:20.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service nodeport-service with the type=NodePort in namespace services-2934 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2934 STEP: creating replication controller externalsvc in namespace services-2934 I0308 15:25:20.514089 7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-2934, replica count: 2 I0308 15:25:23.564700 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Mar 8 15:25:23.646: INFO: Creating new exec pod Mar 8 15:25:25.661: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-2934 execpodjkl9r -- /bin/sh -x -c nslookup nodeport-service' Mar 8 15:25:25.876: INFO: stderr: "I0308 15:25:25.794459 899 log.go:172] (0xc00001a160) (0xc0006efae0) Create stream\nI0308 15:25:25.794511 899 log.go:172] (0xc00001a160) (0xc0006efae0) Stream added, broadcasting: 1\nI0308 15:25:25.796744 899 log.go:172] (0xc00001a160) Reply frame received for 1\nI0308 15:25:25.796771 899 log.go:172] (0xc00001a160) (0xc0008b6000) Create stream\nI0308 15:25:25.796780 899 log.go:172] (0xc00001a160) (0xc0008b6000) Stream added, broadcasting: 3\nI0308 15:25:25.797694 899 log.go:172] (0xc00001a160) Reply frame received for 3\nI0308 15:25:25.797724 899 log.go:172] (0xc00001a160) (0xc0000c0000) Create stream\nI0308 15:25:25.797734 899 log.go:172] (0xc00001a160) (0xc0000c0000) Stream added, broadcasting: 5\nI0308 15:25:25.798769 899 log.go:172] (0xc00001a160) Reply frame received for 5\nI0308 15:25:25.863116 899 log.go:172] (0xc00001a160) Data frame received for 5\nI0308 15:25:25.863140 899 log.go:172] (0xc0000c0000) (5) Data frame handling\nI0308 15:25:25.863152 899 log.go:172] (0xc0000c0000) (5) Data frame sent\n+ nslookup nodeport-service\nI0308 15:25:25.869727 899 log.go:172] (0xc00001a160) Data frame received for 3\nI0308 15:25:25.869756 899 log.go:172] (0xc0008b6000) (3) Data frame handling\nI0308 15:25:25.869770 899 log.go:172] (0xc0008b6000) (3) Data frame sent\nI0308 15:25:25.870748 899 log.go:172] (0xc00001a160) Data frame received for 3\nI0308 15:25:25.870763 899 log.go:172] (0xc0008b6000) (3) Data frame handling\nI0308 15:25:25.870773 
899 log.go:172] (0xc0008b6000) (3) Data frame sent\nI0308 15:25:25.871009 899 log.go:172] (0xc00001a160) Data frame received for 3\nI0308 15:25:25.871049 899 log.go:172] (0xc0008b6000) (3) Data frame handling\nI0308 15:25:25.871280 899 log.go:172] (0xc00001a160) Data frame received for 5\nI0308 15:25:25.871289 899 log.go:172] (0xc0000c0000) (5) Data frame handling\nI0308 15:25:25.872884 899 log.go:172] (0xc00001a160) Data frame received for 1\nI0308 15:25:25.872908 899 log.go:172] (0xc0006efae0) (1) Data frame handling\nI0308 15:25:25.872926 899 log.go:172] (0xc0006efae0) (1) Data frame sent\nI0308 15:25:25.873095 899 log.go:172] (0xc00001a160) (0xc0006efae0) Stream removed, broadcasting: 1\nI0308 15:25:25.873122 899 log.go:172] (0xc00001a160) Go away received\nI0308 15:25:25.873628 899 log.go:172] (0xc00001a160) (0xc0006efae0) Stream removed, broadcasting: 1\nI0308 15:25:25.873647 899 log.go:172] (0xc00001a160) (0xc0008b6000) Stream removed, broadcasting: 3\nI0308 15:25:25.873660 899 log.go:172] (0xc00001a160) (0xc0000c0000) Stream removed, broadcasting: 5\n" Mar 8 15:25:25.877: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-2934.svc.cluster.local\tcanonical name = externalsvc.services-2934.svc.cluster.local.\nName:\texternalsvc.services-2934.svc.cluster.local\nAddress: 10.96.179.26\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2934, will wait for the garbage collector to delete the pods Mar 8 15:25:25.934: INFO: Deleting ReplicationController externalsvc took: 4.612271ms Mar 8 15:25:26.235: INFO: Terminating ReplicationController externalsvc pods took: 300.221261ms Mar 8 15:25:32.733: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:25:32.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2934" for this suite. 
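The nslookup output above shows the end state of the Services test: nodeport-service has become a CNAME for externalsvc.services-2934.svc.cluster.local. The updated Service object at that point is equivalent to the following sketch, reconstructed from the log (port details omitted, since an ExternalName service carries no cluster IP).

apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: services-2934
spec:
  type: ExternalName
  externalName: externalsvc.services-2934.svc.cluster.local   # target shown in the nslookup output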
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:12.452 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":280,"completed":86,"skipped":1290,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:25:32.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1111.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1111.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1111.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1111.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-1111.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1111.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 15:25:37.003: INFO: DNS probes using dns-1111/dns-test-2467863e-703f-461a-b908-d42dd1b0c464 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:25:37.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1111" for this suite. 
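The names probed above (dns-querier-2.dns-test-service-2.dns-1111.svc.cluster.local) resolve because the pod sets hostname and subdomain and a headless service named after the subdomain exists in the namespace. A sketch: the hostname, subdomain, and service name come from the log; the selector label, port, and image are assumptions.

apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2
spec:
  clusterIP: None                # headless: per-pod DNS records instead of a virtual IP
  selector:
    dns-test: "true"             # assumed label
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  labels:
    dns-test: "true"
spec:
  hostname: dns-querier-2
  subdomain: dns-test-service-2  # must match the headless service name
  containers:
  - name: querier
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["sleep", "600"]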
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":280,"completed":87,"skipped":1293,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:25:37.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9329.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9329.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9329.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9329.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9329.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9329.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 15:25:41.272: INFO: DNS probes using dns-9329/dns-test-e56a16a7-e9e3-436e-b2de-e1e70af6bcf8 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:25:41.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9329" for this suite. 
•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":280,"completed":88,"skipped":1299,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:25:41.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 8 15:25:41.423: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2a739bb8-dd7b-4928-8183-a3f29d132b3a" in namespace "downward-api-1583" to be "success or failure" Mar 8 15:25:41.431: INFO: Pod "downwardapi-volume-2a739bb8-dd7b-4928-8183-a3f29d132b3a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.567907ms Mar 8 15:25:43.435: INFO: Pod "downwardapi-volume-2a739bb8-dd7b-4928-8183-a3f29d132b3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011221606s Mar 8 15:25:45.438: INFO: Pod "downwardapi-volume-2a739bb8-dd7b-4928-8183-a3f29d132b3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014976621s STEP: Saw pod success Mar 8 15:25:45.438: INFO: Pod "downwardapi-volume-2a739bb8-dd7b-4928-8183-a3f29d132b3a" satisfied condition "success or failure" Mar 8 15:25:45.441: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-2a739bb8-dd7b-4928-8183-a3f29d132b3a container client-container: STEP: delete the pod Mar 8 15:25:45.455: INFO: Waiting for pod downwardapi-volume-2a739bb8-dd7b-4928-8183-a3f29d132b3a to disappear Mar 8 15:25:45.460: INFO: Pod downwardapi-volume-2a739bb8-dd7b-4928-8183-a3f29d132b3a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:25:45.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1583" for this suite. 
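The behavior under test above: when a container declares no memory limit, a downward API resourceFieldRef for limits.memory falls back to the node's allocatable memory. A sketch of a pod that surfaces this value; the container name client-container comes from the log, while the pod name, image, and paths are assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mem-example            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29  # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    # no resources.limits.memory is set, so the value resolves to node allocatable
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory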
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":89,"skipped":1320,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:25:45.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir volume type on node default medium Mar 8 15:25:45.529: INFO: Waiting up to 5m0s for pod "pod-83218677-a6f7-48b0-90b3-9524fd412202" in namespace "emptydir-6955" to be "success or failure" Mar 8 15:25:45.546: INFO: Pod "pod-83218677-a6f7-48b0-90b3-9524fd412202": Phase="Pending", Reason="", readiness=false. Elapsed: 16.384998ms Mar 8 15:25:47.550: INFO: Pod "pod-83218677-a6f7-48b0-90b3-9524fd412202": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02039649s Mar 8 15:25:49.553: INFO: Pod "pod-83218677-a6f7-48b0-90b3-9524fd412202": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023448648s STEP: Saw pod success Mar 8 15:25:49.553: INFO: Pod "pod-83218677-a6f7-48b0-90b3-9524fd412202" satisfied condition "success or failure" Mar 8 15:25:49.554: INFO: Trying to get logs from node latest-worker pod pod-83218677-a6f7-48b0-90b3-9524fd412202 container test-container: STEP: delete the pod Mar 8 15:25:49.581: INFO: Waiting for pod pod-83218677-a6f7-48b0-90b3-9524fd412202 to disappear Mar 8 15:25:49.586: INFO: Pod pod-83218677-a6f7-48b0-90b3-9524fd412202 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:25:49.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6955" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":90,"skipped":1338,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:25:49.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88 Mar 8 15:25:49.647: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 8 15:25:49.656: INFO: Waiting for terminating namespaces to be deleted... Mar 8 15:25:49.658: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 8 15:25:49.663: INFO: kube-proxy-9jc24 from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 8 15:25:49.663: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 15:25:49.663: INFO: kindnet-2j5xm from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 8 15:25:49.663: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 15:25:49.663: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 8 15:25:49.678: INFO: kube-proxy-cx5xz from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 8 15:25:49.678: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 15:25:49.678: INFO: kindnet-spz5f from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 8 15:25:49.678: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 15:25:49.678: INFO: coredns-6955765f44-cgshp from kube-system started at 2020-03-08 14:50:16 +0000 UTC (1 container statuses recorded) Mar 8 15:25:49.678: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-fbcf258d-220d-4a00-a8f8-d915cf1937f6 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-fbcf258d-220d-4a00-a8f8-d915cf1937f6 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-fbcf258d-220d-4a00-a8f8-d915cf1937f6 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:25:53.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1730" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":280,"completed":91,"skipped":1353,"failed":0} ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:25:53.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod liveness-2e1a99d0-c0b9-4c0c-8bab-288db6e07394 in namespace container-probe-3157 Mar 8 15:25:55.929: INFO: Started pod liveness-2e1a99d0-c0b9-4c0c-8bab-288db6e07394 in namespace container-probe-3157 STEP: checking the pod's current state and verifying that restartCount is present Mar 8 15:25:55.931: INFO: Initial restart count of pod liveness-2e1a99d0-c0b9-4c0c-8bab-288db6e07394 is 0 Mar 8 15:26:08.003: INFO: Restart count of pod container-probe-3157/liveness-2e1a99d0-c0b9-4c0c-8bab-288db6e07394 is now 1 (12.071747804s elapsed) Mar 8 15:26:28.045: INFO: Restart count of pod container-probe-3157/liveness-2e1a99d0-c0b9-4c0c-8bab-288db6e07394 is now 2 (32.113113357s elapsed) Mar 8 15:26:48.082: INFO: Restart count of pod container-probe-3157/liveness-2e1a99d0-c0b9-4c0c-8bab-288db6e07394 is now 3 (52.150540823s elapsed) Mar 8 15:27:08.122: INFO: Restart count of pod container-probe-3157/liveness-2e1a99d0-c0b9-4c0c-8bab-288db6e07394 is now 4 (1m12.190143609s elapsed) Mar 8 15:28:16.344: INFO: Restart count of pod container-probe-3157/liveness-2e1a99d0-c0b9-4c0c-8bab-288db6e07394 is now 5 (2m20.412195075s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:28:16.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3157" for this suite. 
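The restart cadence above (roughly every 20s, then a 68s gap before restart 5) is a liveness probe failing on schedule, with the kubelet's exponential crash-loop back-off stretching the later intervals; status.restartCount only ever increases, which is the property under test. A pod that produces this behavior might look like (image, command, and probe timings are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-example             # illustrative name
spec:
  containers:
  - name: liveness
    image: busybox                   # assumed
    command: ["sh", "-c", "touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]   # starts failing once the file is removed
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1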
• [SLOW TEST:142.592 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":280,"completed":92,"skipped":1353,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:28:16.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 8 15:28:20.547: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 8 15:28:20.553: INFO: Pod pod-with-poststart-http-hook still exists Mar 8 15:28:22.554: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 8 15:28:22.557: INFO: Pod pod-with-poststart-http-hook still exists Mar 8 15:28:24.554: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 8 15:28:24.558: INFO: Pod pod-with-poststart-http-hook still exists Mar 8 15:28:26.554: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 8 15:28:26.590: INFO: Pod pod-with-poststart-http-hook still exists Mar 8 15:28:28.554: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 8 15:28:28.557: INFO: Pod pod-with-poststart-http-hook still exists Mar 8 15:28:30.554: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 8 15:28:30.557: INFO: Pod pod-with-poststart-http-hook still exists Mar 8 15:28:32.554: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 8 15:28:32.567: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:28:32.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3871" for this suite. 
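The handler pod created in BeforeEach receives the hook call; the pod under test declares a postStart httpGet hook against it, and the kubelet only marks the container Running after that request returns. A sketch of the hooked pod (host, port, and path are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.1      # assumed image
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart  # illustrative path
          port: 8080                 # assumed handler port
          host: 10.244.1.10          # assumed pod IP of the handler created in BeforeEach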
• [SLOW TEST:16.148 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":280,"completed":93,"skipped":1361,"failed":0} SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:28:32.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 8 15:28:32.645: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 15:28:32.649: INFO: Number of nodes with available pods: 0 Mar 8 15:28:32.649: INFO: Node latest-worker is running more than one daemon pod Mar 8 15:28:33.654: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 15:28:33.657: INFO: Number of nodes with available pods: 0 Mar 8 15:28:33.657: INFO: Node latest-worker is running more than one daemon pod Mar 8 15:28:34.653: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 15:28:34.687: INFO: Number of nodes with available pods: 1 Mar 8 15:28:34.687: INFO: Node latest-worker is running more than one daemon pod Mar 8 15:28:35.653: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 15:28:35.657: INFO: Number of nodes with available pods: 2 Mar 8 15:28:35.657: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Mar 8 15:28:35.687: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 15:28:35.691: INFO: Number of nodes with available pods: 2 Mar 8 15:28:35.691: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1542, will wait for the garbage collector to delete the pods Mar 8 15:28:36.777: INFO: Deleting DaemonSet.extensions daemon-set took: 4.682911ms Mar 8 15:28:37.277: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.231485ms Mar 8 15:29:45.181: INFO: Number of nodes with available pods: 0 Mar 8 15:29:45.181: INFO: Number of running nodes: 0, number of available pods: 0 Mar 8 15:29:45.209: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1542/daemonsets","resourceVersion":"15195"},"items":null} Mar 8 15:29:45.212: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1542/pods","resourceVersion":"15195"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:29:45.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1542" for this suite. • [SLOW TEST:72.652 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":280,"completed":94,"skipped":1366,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:29:45.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: getting the auto-created API token STEP: reading a file in the container Mar 8 15:29:47.816: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8838 pod-service-account-31baff44-cb81-43da-bc61-fd5442c1aa8c -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Mar 8 15:29:48.642: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8838 pod-service-account-31baff44-cb81-43da-bc61-fd5442c1aa8c -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Mar 8 15:29:48.879: INFO: 
Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8838 pod-service-account-31baff44-cb81-43da-bc61-fd5442c1aa8c -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:29:49.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8838" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":280,"completed":95,"skipped":1387,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:29:49.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 15:29:50.327: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 15:29:52.334: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719278190, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719278190, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719278190, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719278190, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 15:29:55.353: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to 
the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:29:55.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5621" for this suite. STEP: Destroying namespace "webhook-5621-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:6.459 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":280,"completed":96,"skipped":1428,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:29:55.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 15:29:56.472: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 15:29:58.482: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719278196, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719278196, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719278196, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719278196, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 15:30:01.514: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 
STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:30:13.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1796" for this suite. STEP: Destroying namespace "webhook-1796-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:18.188 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":280,"completed":97,"skipped":1430,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:30:13.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 8 15:30:13.821: INFO: Waiting up to 5m0s for pod "pod-1a760110-d85f-411e-931a-159294df3993" in namespace "emptydir-3761" to be "success or failure" Mar 8 15:30:13.826: INFO: Pod "pod-1a760110-d85f-411e-931a-159294df3993": Phase="Pending", Reason="", readiness=false. Elapsed: 4.827241ms Mar 8 15:30:15.830: INFO: Pod "pod-1a760110-d85f-411e-931a-159294df3993": Phase="Running", Reason="", readiness=true. Elapsed: 2.009387497s Mar 8 15:30:17.835: INFO: Pod "pod-1a760110-d85f-411e-931a-159294df3993": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013708186s STEP: Saw pod success Mar 8 15:30:17.835: INFO: Pod "pod-1a760110-d85f-411e-931a-159294df3993" satisfied condition "success or failure" Mar 8 15:30:17.838: INFO: Trying to get logs from node latest-worker pod pod-1a760110-d85f-411e-931a-159294df3993 container test-container: STEP: delete the pod Mar 8 15:30:17.869: INFO: Waiting for pod pod-1a760110-d85f-411e-931a-159294df3993 to disappear Mar 8 15:30:17.880: INFO: Pod pod-1a760110-d85f-411e-931a-159294df3993 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:30:17.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3761" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":98,"skipped":1447,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:30:17.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-map-8fb7bf15-712b-4474-87b7-d74a2d2627a4 STEP: Creating a pod to test consume configMaps Mar 8 15:30:17.978: INFO: Waiting up to 5m0s for pod "pod-configmaps-61fcee00-1bde-4349-8f4a-f79d86487b74" in namespace "configmap-6174" to be "success or failure" Mar 8 15:30:17.982: INFO: Pod "pod-configmaps-61fcee00-1bde-4349-8f4a-f79d86487b74": Phase="Pending", Reason="", readiness=false. Elapsed: 4.284917ms Mar 8 15:30:19.985: INFO: Pod "pod-configmaps-61fcee00-1bde-4349-8f4a-f79d86487b74": Phase="Running", Reason="", readiness=true. Elapsed: 2.007045314s Mar 8 15:30:21.988: INFO: Pod "pod-configmaps-61fcee00-1bde-4349-8f4a-f79d86487b74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010621178s STEP: Saw pod success Mar 8 15:30:21.988: INFO: Pod "pod-configmaps-61fcee00-1bde-4349-8f4a-f79d86487b74" satisfied condition "success or failure" Mar 8 15:30:21.990: INFO: Trying to get logs from node latest-worker pod pod-configmaps-61fcee00-1bde-4349-8f4a-f79d86487b74 container configmap-volume-test: STEP: delete the pod Mar 8 15:30:22.024: INFO: Waiting for pod pod-configmaps-61fcee00-1bde-4349-8f4a-f79d86487b74 to disappear Mar 8 15:30:22.048: INFO: Pod pod-configmaps-61fcee00-1bde-4349-8f4a-f79d86487b74 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:30:22.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6174" for this suite. 
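The 'with mappings' variant mounts the ConfigMap with an explicit items list, so a key is projected to a chosen relative path instead of a file named after the key. A sketch reusing the ConfigMap name from this run (key, path, image, and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-mapping-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                   # assumed
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map-8fb7bf15-712b-4474-87b7-d74a2d2627a4
      items:
      - key: data-2                  # illustrative key
        path: path/to/data-2         # the key is remapped to this relative path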
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":99,"skipped":1453,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:30:22.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 8 15:30:22.665: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 15:30:25.726: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 15:30:25.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:30:27.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-5041" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:5.011 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":280,"completed":100,"skipped":1478,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:30:27.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 8 15:30:30.175: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:30:30.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1302" for this suite. 
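The 'Expected: &{OK}' assertion shows the resolution order for termination messages: the container writes OK to its terminationMessagePath and exits 0, so with FallbackToLogsOnError the logs are never consulted (they would be only if the file were empty and the container had failed). A sketch (image and command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example  # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: busybox                   # assumed
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError   # logs are used only on failure with an empty file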
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":101,"skipped":1503,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:30:30.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name cm-test-opt-del-9716d31c-a45c-4871-ab7a-f5e644ffec34 STEP: Creating configMap with name cm-test-opt-upd-4fb3e689-2609-4485-97ef-3ea30ed93369 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-9716d31c-a45c-4871-ab7a-f5e644ffec34 STEP: Updating configmap cm-test-opt-upd-4fb3e689-2609-4485-97ef-3ea30ed93369 STEP: Creating configMap with name cm-test-opt-create-ea8c67c8-7b7a-4bff-9610-d59e7aa7268c STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:31:42.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6804" for this suite. 
• [SLOW TEST:72.508 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":102,"skipped":1506,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:31:42.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-2921 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating statefulset ss in namespace statefulset-2921 Mar 8 15:31:42.819: INFO: Found 0 stateful pods, waiting for 1 Mar 8 15:31:52.824: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 8 15:31:52.863: INFO: Deleting all statefulset in ns statefulset-2921 Mar 8 15:31:52.869: INFO: Scaling statefulset ss to 0 Mar 8 15:32:12.921: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 15:32:12.924: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:32:12.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2921" for this suite. 
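'Getting' and 'updating a scale subresource' operate on .../statefulsets/ss/scale rather than on the StatefulSet itself; writing its spec.replicas is what the test then verifies against the StatefulSet's own Spec.Replicas. The object under test has roughly this shape (labels and image are illustrative; the service name 'test' matches the log):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
  namespace: statefulset-2921
spec:
  serviceName: test                  # headless service created in BeforeEach
  replicas: 1                        # the field the scale subresource reads and writes
  selector:
    matchLabels:
      app: ss                        # illustrative label
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: httpd:2.4.38-alpine   # assumed image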
• [SLOW TEST:30.209 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":280,"completed":103,"skipped":1521,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:32:12.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Mar 8 15:32:17.577: INFO: Successfully updated pod "adopt-release-mw2mz" STEP: Checking that the Job readopts the Pod Mar 8 15:32:17.577: INFO: Waiting up to 15m0s for pod "adopt-release-mw2mz" in namespace "job-6401" to be "adopted" Mar 8 15:32:17.580: INFO: Pod "adopt-release-mw2mz": Phase="Running", Reason="", readiness=true. Elapsed: 3.283041ms Mar 8 15:32:19.584: INFO: Pod "adopt-release-mw2mz": Phase="Running", Reason="", readiness=true. Elapsed: 2.00656631s Mar 8 15:32:19.584: INFO: Pod "adopt-release-mw2mz" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Mar 8 15:32:20.092: INFO: Successfully updated pod "adopt-release-mw2mz" STEP: Checking that the Job releases the Pod Mar 8 15:32:20.092: INFO: Waiting up to 15m0s for pod "adopt-release-mw2mz" in namespace "job-6401" to be "released" Mar 8 15:32:20.097: INFO: Pod "adopt-release-mw2mz": Phase="Running", Reason="", readiness=true. Elapsed: 5.761105ms Mar 8 15:32:22.101: INFO: Pod "adopt-release-mw2mz": Phase="Running", Reason="", readiness=true. Elapsed: 2.009130985s Mar 8 15:32:22.101: INFO: Pod "adopt-release-mw2mz" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:32:22.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6401" for this suite. 
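Adoption and release hinge on ownership and labels: orphaning a pod (removing its controller ownerReference) lets the Job controller re-adopt it as long as its labels still match, while stripping the matching labels makes the controller release it, as the two waits above confirm. A Job of the general shape used here (parallelism, labels, image, and command are illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: adopt-release
spec:
  parallelism: 2                     # the test first waits for active pods == parallelism
  template:
    metadata:
      labels:
        job: adopt-release           # removing labels like this releases the pod
    spec:
      restartPolicy: Never
      containers:
      - name: c                      # illustrative container name
        image: busybox               # assumed
        command: ["sleep", "3600"]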
• [SLOW TEST:9.161 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":280,"completed":104,"skipped":1532,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:32:22.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 8 15:32:22.181: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 8 15:32:27.184: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:32:28.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6839" for this suite. 
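A ReplicationController selects pods by a plain label map, so changing the matched label on the pod-release pod immediately removes it from the controller's scope and a replacement is created. Minimal shape (image is an assumption):

apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release            # editing this label on the pod releases it
    spec:
      containers:
      - name: pod-release
        image: k8s.gcr.io/pause:3.1  # assumed image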
• [SLOW TEST:6.096 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":280,"completed":105,"skipped":1558,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:32:28.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 8 15:32:28.268: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f6b947a3-ef62-47dd-be08-4079b54e7ea1" in namespace "projected-1067" to be "success or failure" Mar 8 15:32:28.286: INFO: Pod "downwardapi-volume-f6b947a3-ef62-47dd-be08-4079b54e7ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 17.972673ms Mar 8 15:32:30.289: INFO: Pod "downwardapi-volume-f6b947a3-ef62-47dd-be08-4079b54e7ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021223318s Mar 8 15:32:32.293: INFO: Pod "downwardapi-volume-f6b947a3-ef62-47dd-be08-4079b54e7ea1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024965485s STEP: Saw pod success Mar 8 15:32:32.293: INFO: Pod "downwardapi-volume-f6b947a3-ef62-47dd-be08-4079b54e7ea1" satisfied condition "success or failure" Mar 8 15:32:32.296: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-f6b947a3-ef62-47dd-be08-4079b54e7ea1 container client-container: STEP: delete the pod Mar 8 15:32:32.327: INFO: Waiting for pod downwardapi-volume-f6b947a3-ef62-47dd-be08-4079b54e7ea1 to disappear Mar 8 15:32:32.332: INFO: Pod downwardapi-volume-f6b947a3-ef62-47dd-be08-4079b54e7ea1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:32:32.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1067" for this suite. 
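The projected variant nests the same downward API items under projected.sources, which allows combining them with secrets and ConfigMaps in a single volume; with no CPU limit declared, limits.cpu again resolves to the node's allocatable CPU. Sketch (names, image, and command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed
    command: ["cat", "/etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu   # no limit set, so node allocatable CPU is reported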
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":106,"skipped":1569,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:32:32.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap that has name configmap-test-emptyKey-597a2c2c-700e-4645-ba7a-2cf32d66c208 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:32:32.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1620" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":280,"completed":107,"skipped":1582,"failed":0} SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:32:32.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 15:32:32.488: INFO: (0) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/ pods/ (200; 5.649303ms)
Mar 8 15:32:32.491: INFO: (1) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.387562ms)
Mar 8 15:32:32.494: INFO: (2) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.001884ms)
Mar 8 15:32:32.497: INFO: (3) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.873283ms)
Mar 8 15:32:32.501: INFO: (4) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.319651ms)
Mar 8 15:32:32.503: INFO: (5) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.625771ms)
Mar 8 15:32:32.506: INFO: (6) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.543302ms)
Mar 8 15:32:32.509: INFO: (7) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.803736ms)
Mar 8 15:32:32.512: INFO: (8) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.791991ms)
Mar 8 15:32:32.514: INFO: (9) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.641504ms)
Mar 8 15:32:32.535: INFO: (10) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 20.782589ms)
Mar 8 15:32:32.542: INFO: (11) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 7.37676ms)
Mar 8 15:32:32.545: INFO: (12) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.618545ms)
Mar 8 15:32:32.548: INFO: (13) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.72878ms)
Mar 8 15:32:32.551: INFO: (14) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.675635ms)
Mar 8 15:32:32.553: INFO: (15) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.79126ms)
Mar 8 15:32:32.559: INFO: (16) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 5.53731ms)
Mar 8 15:32:32.562: INFO: (17) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.071764ms)
Mar 8 15:32:32.565: INFO: (18) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.846348ms)
Mar 8 15:32:32.568: INFO: (19) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.768693ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:32:32.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1209" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":280,"completed":108,"skipped":1589,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:32:32.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 8 15:32:32.619: INFO: Waiting up to 5m0s for pod "pod-8e035e72-249a-4b17-b3c4-788236e6d5c7" in namespace "emptydir-1546" to be "success or failure" Mar 8 15:32:32.660: INFO: Pod "pod-8e035e72-249a-4b17-b3c4-788236e6d5c7": Phase="Pending", Reason="", readiness=false. Elapsed: 41.020801ms Mar 8 15:32:34.664: INFO: Pod "pod-8e035e72-249a-4b17-b3c4-788236e6d5c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.044221508s STEP: Saw pod success Mar 8 15:32:34.664: INFO: Pod "pod-8e035e72-249a-4b17-b3c4-788236e6d5c7" satisfied condition "success or failure" Mar 8 15:32:34.679: INFO: Trying to get logs from node latest-worker pod pod-8e035e72-249a-4b17-b3c4-788236e6d5c7 container test-container: STEP: delete the pod Mar 8 15:32:34.724: INFO: Waiting for pod pod-8e035e72-249a-4b17-b3c4-788236e6d5c7 to disappear Mar 8 15:32:34.727: INFO: Pod pod-8e035e72-249a-4b17-b3c4-788236e6d5c7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:32:34.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1546" for this suite.
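The (root,0666,tmpfs) triple in the test name encodes the matrix point being exercised: run as root, expect file mode 0666, on a memory-backed medium, which is requested with medium: Memory. A sketch (image and command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-example        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                   # assumed
    command: ["sh", "-c", "echo data > /test-volume/test-file && chmod 0666 /test-volume/test-file && stat -c '%a' /test-volume/test-file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # tmpfs-backed emptyDir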
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":109,"skipped":1595,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:32:34.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 8 15:32:34.816: INFO: Waiting up to 5m0s for pod "downwardapi-volume-244f00c0-721e-4e7f-b2eb-7cf2ac393573" in namespace "downward-api-8934" to be "success or failure" Mar 8 15:32:34.822: INFO: Pod "downwardapi-volume-244f00c0-721e-4e7f-b2eb-7cf2ac393573": Phase="Pending", Reason="", readiness=false. Elapsed: 6.238739ms Mar 8 15:32:36.826: INFO: Pod "downwardapi-volume-244f00c0-721e-4e7f-b2eb-7cf2ac393573": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009712118s STEP: Saw pod success Mar 8 15:32:36.826: INFO: Pod "downwardapi-volume-244f00c0-721e-4e7f-b2eb-7cf2ac393573" satisfied condition "success or failure" Mar 8 15:32:36.828: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-244f00c0-721e-4e7f-b2eb-7cf2ac393573 container client-container: STEP: delete the pod Mar 8 15:32:36.918: INFO: Waiting for pod downwardapi-volume-244f00c0-721e-4e7f-b2eb-7cf2ac393573 to disappear Mar 8 15:32:36.921: INFO: Pod downwardapi-volume-244f00c0-721e-4e7f-b2eb-7cf2ac393573 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:32:36.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8934" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":280,"completed":110,"skipped":1601,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:32:36.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 15:32:37.403: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 15:32:40.445: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:32:40.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-235" for this suite. STEP: Destroying namespace "webhook-235-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":280,"completed":111,"skipped":1601,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:32:40.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: getting the auto-created API token Mar 8 15:32:41.454: INFO: created pod pod-service-account-defaultsa Mar 8 15:32:41.454: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 8 15:32:41.463: INFO: created pod pod-service-account-mountsa Mar 8 15:32:41.463: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 8 15:32:41.469: INFO: created pod pod-service-account-nomountsa Mar 8 15:32:41.469: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 8 15:32:41.482: INFO: created pod pod-service-account-defaultsa-mountspec Mar 8 15:32:41.482: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 8 15:32:41.487: INFO: created pod pod-service-account-mountsa-mountspec Mar 8 15:32:41.487: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 8 15:32:41.517: INFO: created pod pod-service-account-nomountsa-mountspec Mar 8 15:32:41.518: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 8 15:32:41.553: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 8 15:32:41.553: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 8 15:32:41.582: INFO: created pod pod-service-account-mountsa-nomountspec Mar 8 15:32:41.582: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 8 15:32:41.601: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 8 15:32:41.601: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:32:41.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5211" for this suite. 
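The nine pods above cover the combinations of service-account-level and pod-level token automount settings; when the pod spec sets automountServiceAccountToken explicitly, it takes precedence over the ServiceAccount's own setting. A pod opting out regardless of its service account might look like this (a sketch; names and image are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-nomountspec-example   # illustrative name
spec:
  serviceAccountName: default
  automountServiceAccountToken: false   # pod-level field; when set it overrides the ServiceAccount's setting
  containers:
  - name: main                          # illustrative container name
    image: k8s.gcr.io/pause:3.1         # illustrative image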
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":280,"completed":112,"skipped":1617,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:32:41.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-map-3437ec3c-ace4-44b3-9a08-a4a6bf2cb37b STEP: Creating a pod to test consume configMaps Mar 8 15:32:41.856: INFO: Waiting up to 5m0s for pod "pod-configmaps-0160c6f4-0728-4106-a029-022bafe8a672" in namespace "configmap-8390" to be "success or failure" Mar 8 15:32:41.955: INFO: Pod "pod-configmaps-0160c6f4-0728-4106-a029-022bafe8a672": Phase="Pending", Reason="", readiness=false. Elapsed: 99.262231ms Mar 8 15:32:43.959: INFO: Pod "pod-configmaps-0160c6f4-0728-4106-a029-022bafe8a672": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103441838s Mar 8 15:32:45.962: INFO: Pod "pod-configmaps-0160c6f4-0728-4106-a029-022bafe8a672": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105983516s STEP: Saw pod success Mar 8 15:32:45.962: INFO: Pod "pod-configmaps-0160c6f4-0728-4106-a029-022bafe8a672" satisfied condition "success or failure" Mar 8 15:32:45.964: INFO: Trying to get logs from node latest-worker pod pod-configmaps-0160c6f4-0728-4106-a029-022bafe8a672 container configmap-volume-test: STEP: delete the pod Mar 8 15:32:45.982: INFO: Waiting for pod pod-configmaps-0160c6f4-0728-4106-a029-022bafe8a672 to disappear Mar 8 15:32:45.991: INFO: Pod pod-configmaps-0160c6f4-0728-4106-a029-022bafe8a672 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:32:45.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8390" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":280,"completed":113,"skipped":1624,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:32:45.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 15:32:46.040: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config version' Mar 8 15:32:46.191: INFO: stderr: "" Mar 8 15:32:46.191: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18+\", GitVersion:\"v1.18.0-alpha.2.152+426b3538900329\", GitCommit:\"426b3538900329ed2ce5a0cb1cccf2f0ff32db60\", GitTreeState:\"clean\", BuildDate:\"2020-01-25T12:55:25Z\", GoVersion:\"go1.13.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:09:19Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:32:46.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1836" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":280,"completed":114,"skipped":1634,"failed":0} SSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:32:46.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test substitution in container's args Mar 8 15:32:46.262: INFO: Waiting up to 5m0s for pod "var-expansion-04598332-4424-4450-8c5f-669cd6f6c08b" in namespace "var-expansion-3535" to be "success or failure" Mar 8 15:32:46.267: INFO: Pod "var-expansion-04598332-4424-4450-8c5f-669cd6f6c08b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.3616ms Mar 8 15:32:48.270: INFO: Pod "var-expansion-04598332-4424-4450-8c5f-669cd6f6c08b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007772295s Mar 8 15:32:50.275: INFO: Pod "var-expansion-04598332-4424-4450-8c5f-669cd6f6c08b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012072177s STEP: Saw pod success Mar 8 15:32:50.275: INFO: Pod "var-expansion-04598332-4424-4450-8c5f-669cd6f6c08b" satisfied condition "success or failure" Mar 8 15:32:50.277: INFO: Trying to get logs from node latest-worker pod var-expansion-04598332-4424-4450-8c5f-669cd6f6c08b container dapi-container: STEP: delete the pod Mar 8 15:32:50.299: INFO: Waiting for pod var-expansion-04598332-4424-4450-8c5f-669cd6f6c08b to disappear Mar 8 15:32:50.332: INFO: Pod var-expansion-04598332-4424-4450-8c5f-669cd6f6c08b no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:32:50.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3535" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":280,"completed":115,"skipped":1639,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:32:50.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-map-c5149375-8039-4266-b73b-5b9e92f8a304 STEP: Creating a pod to test consume secrets Mar 8 15:32:50.471: INFO: Waiting up to 5m0s for pod "pod-secrets-43786231-1af4-4c15-bcf6-53bb64bd57ac" in namespace "secrets-5044" to be "success or failure" Mar 8 15:32:50.481: INFO: Pod "pod-secrets-43786231-1af4-4c15-bcf6-53bb64bd57ac": Phase="Pending", Reason="", readiness=false. Elapsed: 9.823925ms Mar 8 15:32:52.484: INFO: Pod "pod-secrets-43786231-1af4-4c15-bcf6-53bb64bd57ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01320675s STEP: Saw pod success Mar 8 15:32:52.484: INFO: Pod "pod-secrets-43786231-1af4-4c15-bcf6-53bb64bd57ac" satisfied condition "success or failure" Mar 8 15:32:52.489: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-43786231-1af4-4c15-bcf6-53bb64bd57ac container secret-volume-test: STEP: delete the pod Mar 8 15:32:52.540: INFO: Waiting for pod pod-secrets-43786231-1af4-4c15-bcf6-53bb64bd57ac to disappear Mar 8 15:32:52.549: INFO: Pod pod-secrets-43786231-1af4-4c15-bcf6-53bb64bd57ac no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:32:52.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5044" for this suite. 
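The Secrets volume test above works the same way as the earlier ConfigMap mapping test: the items list renames a secret key to a custom path within the mount. A minimal sketch (the image and the key-to-path mapping are illustrative assumptions; the secret name is taken from the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example             # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test            # container name as logged
    image: busybox                      # illustrative image
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-c5149375-8039-4266-b73b-5b9e92f8a304   # name as logged
      items:
      - key: data-1                     # illustrative key-to-path mapping
        path: new-path-data-1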
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":116,"skipped":1672,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:32:52.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:32:54.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3110" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":280,"completed":117,"skipped":1691,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:32:54.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 8 15:32:54.726: INFO: Waiting up to 5m0s for pod "pod-c3b6bb31-0bcd-469d-a23a-5212387f08a1" in namespace "emptydir-2729" to be "success or failure" Mar 8 15:32:54.742: INFO: Pod "pod-c3b6bb31-0bcd-469d-a23a-5212387f08a1": Phase="Pending", Reason="", readiness=false. Elapsed: 16.32954ms Mar 8 15:32:56.745: INFO: Pod "pod-c3b6bb31-0bcd-469d-a23a-5212387f08a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019482919s Mar 8 15:32:58.749: INFO: Pod "pod-c3b6bb31-0bcd-469d-a23a-5212387f08a1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023458973s STEP: Saw pod success Mar 8 15:32:58.749: INFO: Pod "pod-c3b6bb31-0bcd-469d-a23a-5212387f08a1" satisfied condition "success or failure" Mar 8 15:32:58.753: INFO: Trying to get logs from node latest-worker2 pod pod-c3b6bb31-0bcd-469d-a23a-5212387f08a1 container test-container: STEP: delete the pod Mar 8 15:32:58.822: INFO: Waiting for pod pod-c3b6bb31-0bcd-469d-a23a-5212387f08a1 to disappear Mar 8 15:32:58.837: INFO: Pod pod-c3b6bb31-0bcd-469d-a23a-5212387f08a1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:32:58.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2729" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":118,"skipped":1700,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:32:58.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 8 15:32:58.896: INFO: Waiting up to 5m0s for pod "downwardapi-volume-78b2c6cb-9a79-4102-8c5e-fedf387f0e9e" in namespace "downward-api-3496" to be "success or failure" Mar 8 15:32:58.903: INFO: Pod "downwardapi-volume-78b2c6cb-9a79-4102-8c5e-fedf387f0e9e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.387213ms Mar 8 15:33:00.905: INFO: Pod "downwardapi-volume-78b2c6cb-9a79-4102-8c5e-fedf387f0e9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008653374s STEP: Saw pod success Mar 8 15:33:00.905: INFO: Pod "downwardapi-volume-78b2c6cb-9a79-4102-8c5e-fedf387f0e9e" satisfied condition "success or failure" Mar 8 15:33:00.906: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-78b2c6cb-9a79-4102-8c5e-fedf387f0e9e container client-container: STEP: delete the pod Mar 8 15:33:00.954: INFO: Waiting for pod downwardapi-volume-78b2c6cb-9a79-4102-8c5e-fedf387f0e9e to disappear Mar 8 15:33:00.966: INFO: Pod downwardapi-volume-78b2c6cb-9a79-4102-8c5e-fedf387f0e9e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:33:00.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3496" for this suite. 
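The "set mode on item file" test above requests a specific per-item file mode in a downwardAPI volume and verifies it with ls -l. A pod of that shape might be (a sketch; the name, image, and mode value are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-example        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container              # container name as logged
    image: busybox                      # illustrative image
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        mode: 0400                      # per-item mode; overrides the volume's defaultMode for this file
        fieldRef:
          fieldPath: metadata.name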
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":119,"skipped":1703,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:33:00.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Mar 8 15:33:01.058: INFO: Created pod &Pod{ObjectMeta:{dns-2261 dns-2261 /api/v1/namespaces/dns-2261/pods/dns-2261 5d6abad6-cecb-4f64-9700-afe25d4eae61 16738 0 2020-03-08 15:33:01 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wpw8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wpw8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wpw8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]P
odReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 15:33:01.069: INFO: The status of Pod dns-2261 is Pending, waiting for it to be Running (with Ready = true) Mar 8 15:33:03.081: INFO: The status of Pod dns-2261 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Mar 8 15:33:03.081: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-2261 PodName:dns-2261 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 15:33:03.081: INFO: >>> kubeConfig: /root/.kube/config I0308 15:33:03.460275 7 log.go:172] (0xc0028446e0) (0xc002824460) Create stream I0308 15:33:03.460319 7 log.go:172] (0xc0028446e0) (0xc002824460) Stream added, broadcasting: 1 I0308 15:33:03.463141 7 log.go:172] (0xc0028446e0) Reply frame received for 1 I0308 15:33:03.463204 7 log.go:172] (0xc0028446e0) (0xc001a120a0) Create stream I0308 15:33:03.463222 7 log.go:172] (0xc0028446e0) (0xc001a120a0) Stream added, broadcasting: 3 I0308 15:33:03.465955 7 log.go:172] (0xc0028446e0) Reply frame received for 3 I0308 15:33:03.466006 7 log.go:172] (0xc0028446e0) (0xc001b54000) Create stream I0308 15:33:03.466027 7 log.go:172] (0xc0028446e0) (0xc001b54000) Stream added, broadcasting: 5 I0308 15:33:03.467136 7 log.go:172] (0xc0028446e0) Reply frame received for 5 I0308 15:33:03.573784 7 log.go:172] (0xc0028446e0) Data frame received for 3 I0308 15:33:03.573806 7 log.go:172] (0xc001a120a0) (3) Data frame handling I0308 15:33:03.573821 7 log.go:172] (0xc001a120a0) (3) Data frame sent I0308 15:33:03.574166 7 log.go:172] (0xc0028446e0) Data frame received for 5 I0308 15:33:03.574195 7 log.go:172] (0xc001b54000) (5) Data frame handling I0308 15:33:03.574390 7 log.go:172] (0xc0028446e0) Data frame received for 3 I0308 15:33:03.574407 7 log.go:172] (0xc001a120a0) (3) Data frame handling I0308 15:33:03.575781 7 log.go:172] (0xc0028446e0) Data frame received for 1 I0308 15:33:03.575798 7 log.go:172] (0xc002824460) (1) Data frame handling I0308 15:33:03.575817 7 log.go:172] (0xc002824460) (1) Data frame sent I0308 15:33:03.576041 7 log.go:172] (0xc0028446e0) (0xc002824460) Stream removed, broadcasting: 1 I0308 15:33:03.576079 7 log.go:172] (0xc0028446e0) Go away received I0308 15:33:03.576182 7 log.go:172] (0xc0028446e0) (0xc002824460) Stream removed, broadcasting: 1 I0308 15:33:03.576210 7 log.go:172] (0xc0028446e0) (0xc001a120a0) Stream removed, broadcasting: 3 I0308 15:33:03.576245 7 log.go:172] (0xc0028446e0) (0xc001b54000) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Mar 8 15:33:03.576: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-2261 PodName:dns-2261 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 15:33:03.576: INFO: >>> kubeConfig: /root/.kube/config I0308 15:33:03.607433 7 log.go:172] (0xc001fd2dc0) (0xc001a12460) Create stream I0308 15:33:03.607456 7 log.go:172] (0xc001fd2dc0) (0xc001a12460) Stream added, broadcasting: 1 I0308 15:33:03.610753 7 log.go:172] (0xc001fd2dc0) Reply frame received for 1 I0308 15:33:03.610783 7 log.go:172] (0xc001fd2dc0) (0xc001a12640) Create stream I0308 15:33:03.610794 7 log.go:172] (0xc001fd2dc0) (0xc001a12640) Stream added, broadcasting: 3 I0308 15:33:03.612706 7 log.go:172] (0xc001fd2dc0) Reply frame received for 3 I0308 15:33:03.612738 7 log.go:172] (0xc001fd2dc0) (0xc001a12aa0) Create stream I0308 15:33:03.612752 7 log.go:172] (0xc001fd2dc0) (0xc001a12aa0) Stream added, broadcasting: 5 I0308 15:33:03.613802 7 log.go:172] (0xc001fd2dc0) Reply frame received for 5 I0308 15:33:03.688563 7 log.go:172] (0xc001fd2dc0) Data frame received for 3 I0308 15:33:03.688595 7 log.go:172] (0xc001a12640) (3) Data frame handling I0308 15:33:03.688614 7 log.go:172] (0xc001a12640) (3) Data frame sent I0308 15:33:03.688903 7 log.go:172] (0xc001fd2dc0) Data frame received for 3 I0308 15:33:03.688920 7 log.go:172] (0xc001a12640) (3) Data frame handling I0308 15:33:03.689025 7 log.go:172] (0xc001fd2dc0) Data frame received for 5 I0308 15:33:03.689045 7 log.go:172] (0xc001a12aa0) (5) Data frame handling I0308 15:33:03.690562 7 log.go:172] (0xc001fd2dc0) Data frame received for 1 I0308 15:33:03.690589 7 log.go:172] (0xc001a12460) (1) Data frame handling I0308 15:33:03.690613 7 log.go:172] (0xc001a12460) (1) Data frame sent I0308 15:33:03.690632 7 log.go:172] (0xc001fd2dc0) (0xc001a12460) Stream removed, broadcasting: 1 I0308 15:33:03.690659 7 log.go:172] (0xc001fd2dc0) Go away received I0308 15:33:03.690714 7 log.go:172] (0xc001fd2dc0) (0xc001a12460) Stream removed, broadcasting: 1 I0308 15:33:03.690734 7 log.go:172] (0xc001fd2dc0) (0xc001a12640) Stream removed, broadcasting: 3 I0308 15:33:03.690745 7 log.go:172] (0xc001fd2dc0) (0xc001a12aa0) Stream removed, broadcasting: 5 Mar 8 15:33:03.690: INFO: Deleting pod dns-2261... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:33:03.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2261" for this suite. 
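The pod spec dumped above reduces to the following manifest; the dnsPolicy and dnsConfig fields are the part under test. This is reconstructed from the logged spec, with defaulted fields omitted:

apiVersion: v1
kind: Pod
metadata:
  name: dns-2261
  namespace: dns-2261
spec:
  dnsPolicy: "None"                     # ignore the cluster DNS settings entirely...
  dnsConfig:                            # ...and supply resolver configuration explicitly
    nameservers:
    - 1.1.1.1
    searches:
    - resolv.conf.local
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["pause"]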
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":280,"completed":120,"skipped":1711,"failed":0} SSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:33:03.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:33:16.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9384" for this suite. STEP: Destroying namespace "nsdeletetest-7632" for this suite. Mar 8 15:33:17.005: INFO: Namespace nsdeletetest-7632 was already deleted STEP: Destroying namespace "nsdeletetest-1969" for this suite. 
• [SLOW TEST:13.286 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":280,"completed":121,"skipped":1715,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:33:17.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-b16bb905-454a-4a1c-841c-13a66d7c35b4 STEP: Creating a pod to test consume configMaps Mar 8 15:33:17.095: INFO: Waiting up to 5m0s for pod "pod-configmaps-99171a9a-81a7-4fd5-84bd-008f4eca8ceb" in namespace "configmap-692" to be "success or failure" Mar 8 15:33:17.101: INFO: Pod "pod-configmaps-99171a9a-81a7-4fd5-84bd-008f4eca8ceb": Phase="Pending", Reason="", readiness=false. Elapsed: 5.8824ms Mar 8 15:33:19.105: INFO: Pod "pod-configmaps-99171a9a-81a7-4fd5-84bd-008f4eca8ceb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010040301s STEP: Saw pod success Mar 8 15:33:19.105: INFO: Pod "pod-configmaps-99171a9a-81a7-4fd5-84bd-008f4eca8ceb" satisfied condition "success or failure" Mar 8 15:33:19.108: INFO: Trying to get logs from node latest-worker pod pod-configmaps-99171a9a-81a7-4fd5-84bd-008f4eca8ceb container configmap-volume-test: STEP: delete the pod Mar 8 15:33:19.178: INFO: Waiting for pod pod-configmaps-99171a9a-81a7-4fd5-84bd-008f4eca8ceb to disappear Mar 8 15:33:19.189: INFO: Pod pod-configmaps-99171a9a-81a7-4fd5-84bd-008f4eca8ceb no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:33:19.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-692" for this suite. 
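The ConfigMap test above consumes the same ConfigMap through two separate volumes in one pod. A minimal sketch (the image, mount paths, and the key name data-1 are illustrative assumptions; the ConfigMap name is taken from the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-multi-example    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test         # container name as logged
    image: busybox                      # illustrative image
    command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume-b16bb905-454a-4a1c-841c-13a66d7c35b4   # name as logged
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume-b16bb905-454a-4a1c-841c-13a66d7c35b4   # same ConfigMap, second volume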
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":280,"completed":122,"skipped":1719,"failed":0} SS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:33:19.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Mar 8 15:33:19.251: INFO: Waiting up to 5m0s for pod "downward-api-2d926d2c-b87d-4c0e-9cbe-9bf54ccf916b" in namespace "downward-api-247" to be "success or failure" Mar 8 15:33:19.255: INFO: Pod "downward-api-2d926d2c-b87d-4c0e-9cbe-9bf54ccf916b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.9581ms Mar 8 15:33:21.259: INFO: Pod "downward-api-2d926d2c-b87d-4c0e-9cbe-9bf54ccf916b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008539004s Mar 8 15:33:23.263: INFO: Pod "downward-api-2d926d2c-b87d-4c0e-9cbe-9bf54ccf916b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012826087s STEP: Saw pod success Mar 8 15:33:23.263: INFO: Pod "downward-api-2d926d2c-b87d-4c0e-9cbe-9bf54ccf916b" satisfied condition "success or failure" Mar 8 15:33:23.266: INFO: Trying to get logs from node latest-worker pod downward-api-2d926d2c-b87d-4c0e-9cbe-9bf54ccf916b container dapi-container: STEP: delete the pod Mar 8 15:33:23.300: INFO: Waiting for pod downward-api-2d926d2c-b87d-4c0e-9cbe-9bf54ccf916b to disappear Mar 8 15:33:23.331: INFO: Pod downward-api-2d926d2c-b87d-4c0e-9cbe-9bf54ccf916b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:33:23.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-247" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":280,"completed":123,"skipped":1721,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:33:23.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:33:39.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2826" for this suite. • [SLOW TEST:16.164 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":280,"completed":124,"skipped":1750,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:33:39.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-99883676-aed5-492a-b013-49541b7df7b9 STEP: Creating a pod to test consume configMaps Mar 8 15:33:39.611: INFO: Waiting up to 5m0s for pod "pod-configmaps-5ed2f819-28c4-40e1-b669-5ed5ee4d86a1" in namespace "configmap-4790" to be "success or failure" Mar 8 15:33:39.617: INFO: Pod "pod-configmaps-5ed2f819-28c4-40e1-b669-5ed5ee4d86a1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.143533ms Mar 8 15:33:41.622: INFO: Pod "pod-configmaps-5ed2f819-28c4-40e1-b669-5ed5ee4d86a1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.011256872s STEP: Saw pod success Mar 8 15:33:41.622: INFO: Pod "pod-configmaps-5ed2f819-28c4-40e1-b669-5ed5ee4d86a1" satisfied condition "success or failure" Mar 8 15:33:41.624: INFO: Trying to get logs from node latest-worker pod pod-configmaps-5ed2f819-28c4-40e1-b669-5ed5ee4d86a1 container configmap-volume-test: STEP: delete the pod Mar 8 15:33:41.641: INFO: Waiting for pod pod-configmaps-5ed2f819-28c4-40e1-b669-5ed5ee4d86a1 to disappear Mar 8 15:33:41.646: INFO: Pod pod-configmaps-5ed2f819-28c4-40e1-b669-5ed5ee4d86a1 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:33:41.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4790" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":125,"skipped":1776,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:33:41.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88 Mar 8 15:33:41.725: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 8 15:33:41.735: INFO: Waiting for terminating namespaces to be deleted... Mar 8 15:33:41.737: INFO: Logging pods the kubelet thinks are on node latest-worker before test Mar 8 15:33:41.741: INFO: kube-proxy-9jc24 from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container status recorded) Mar 8 15:33:41.741: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 15:33:41.741: INFO: kindnet-2j5xm from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container status recorded) Mar 8 15:33:41.741: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 15:33:41.741: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Mar 8 15:33:41.745: INFO: kindnet-spz5f from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container status recorded) Mar 8 15:33:41.745: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 15:33:41.745: INFO: coredns-6955765f44-cgshp from kube-system started at 2020-03-08 14:50:16 +0000 UTC (1 container status recorded) Mar 8 15:33:41.745: INFO: Container coredns ready: true, restart count 0 Mar 8 15:33:41.745: INFO: kube-proxy-cx5xz from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container status recorded) Mar 8 15:33:41.745: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fa5e0b561717c6], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:33:42.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1540" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":280,"completed":126,"skipped":1781,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:33:42.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: set up a multi version CRD Mar 8 15:33:42.873: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:33:59.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7694" for this suite. 
• [SLOW TEST:16.455 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":280,"completed":127,"skipped":1804,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:33:59.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-downwardapi-7dfk STEP: Creating a pod to test atomic-volume-subpath Mar 8 15:33:59.331: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-7dfk" in namespace "subpath-5933" to be "success or failure" Mar 8 15:33:59.335: INFO: Pod "pod-subpath-test-downwardapi-7dfk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.361009ms Mar 8 15:34:01.345: INFO: Pod "pod-subpath-test-downwardapi-7dfk": Phase="Running", Reason="", readiness=true. Elapsed: 2.013488921s Mar 8 15:34:03.348: INFO: Pod "pod-subpath-test-downwardapi-7dfk": Phase="Running", Reason="", readiness=true. Elapsed: 4.016755883s Mar 8 15:34:05.352: INFO: Pod "pod-subpath-test-downwardapi-7dfk": Phase="Running", Reason="", readiness=true. Elapsed: 6.020930821s Mar 8 15:34:07.356: INFO: Pod "pod-subpath-test-downwardapi-7dfk": Phase="Running", Reason="", readiness=true. Elapsed: 8.024832608s Mar 8 15:34:09.360: INFO: Pod "pod-subpath-test-downwardapi-7dfk": Phase="Running", Reason="", readiness=true. Elapsed: 10.02850801s Mar 8 15:34:11.364: INFO: Pod "pod-subpath-test-downwardapi-7dfk": Phase="Running", Reason="", readiness=true. Elapsed: 12.032491928s Mar 8 15:34:13.367: INFO: Pod "pod-subpath-test-downwardapi-7dfk": Phase="Running", Reason="", readiness=true. Elapsed: 14.036408348s Mar 8 15:34:15.372: INFO: Pod "pod-subpath-test-downwardapi-7dfk": Phase="Running", Reason="", readiness=true. Elapsed: 16.040602226s Mar 8 15:34:17.376: INFO: Pod "pod-subpath-test-downwardapi-7dfk": Phase="Running", Reason="", readiness=true. Elapsed: 18.045227711s Mar 8 15:34:19.380: INFO: Pod "pod-subpath-test-downwardapi-7dfk": Phase="Running", Reason="", readiness=true. Elapsed: 20.049225336s Mar 8 15:34:21.384: INFO: Pod "pod-subpath-test-downwardapi-7dfk": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.053188974s STEP: Saw pod success Mar 8 15:34:21.384: INFO: Pod "pod-subpath-test-downwardapi-7dfk" satisfied condition "success or failure" Mar 8 15:34:21.387: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-7dfk container test-container-subpath-downwardapi-7dfk: STEP: delete the pod Mar 8 15:34:21.423: INFO: Waiting for pod pod-subpath-test-downwardapi-7dfk to disappear Mar 8 15:34:21.432: INFO: Pod pod-subpath-test-downwardapi-7dfk no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-7dfk Mar 8 15:34:21.432: INFO: Deleting pod "pod-subpath-test-downwardapi-7dfk" in namespace "subpath-5933" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:34:21.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5933" for this suite. • [SLOW TEST:22.218 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":280,"completed":128,"skipped":1806,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:34:21.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 8 15:34:21.542: INFO: Waiting up to 5m0s for pod "pod-6bcecff6-a8b7-42e7-8570-842535b8f9c0" in namespace "emptydir-4235" to be "success or failure" Mar 8 15:34:21.558: INFO: Pod "pod-6bcecff6-a8b7-42e7-8570-842535b8f9c0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.119577ms Mar 8 15:34:23.562: INFO: Pod "pod-6bcecff6-a8b7-42e7-8570-842535b8f9c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020008079s Mar 8 15:34:25.566: INFO: Pod "pod-6bcecff6-a8b7-42e7-8570-842535b8f9c0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023762697s STEP: Saw pod success Mar 8 15:34:25.566: INFO: Pod "pod-6bcecff6-a8b7-42e7-8570-842535b8f9c0" satisfied condition "success or failure" Mar 8 15:34:25.569: INFO: Trying to get logs from node latest-worker pod pod-6bcecff6-a8b7-42e7-8570-842535b8f9c0 container test-container: STEP: delete the pod Mar 8 15:34:25.609: INFO: Waiting for pod pod-6bcecff6-a8b7-42e7-8570-842535b8f9c0 to disappear Mar 8 15:34:25.615: INFO: Pod pod-6bcecff6-a8b7-42e7-8570-842535b8f9c0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:34:25.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4235" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":129,"skipped":1820,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:34:25.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name projected-secret-test-f3060edf-4e6f-4a7e-a591-4cd489b73474 STEP: Creating a pod to test consume secrets Mar 8 15:34:25.677: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-30218d8c-39f8-4fd1-ad59-414f14e80ff9" in namespace "projected-5028" to be "success or failure" Mar 8 15:34:25.696: INFO: Pod "pod-projected-secrets-30218d8c-39f8-4fd1-ad59-414f14e80ff9": Phase="Pending", Reason="", readiness=false. Elapsed: 19.263399ms Mar 8 15:34:27.707: INFO: Pod "pod-projected-secrets-30218d8c-39f8-4fd1-ad59-414f14e80ff9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.029971852s STEP: Saw pod success Mar 8 15:34:27.707: INFO: Pod "pod-projected-secrets-30218d8c-39f8-4fd1-ad59-414f14e80ff9" satisfied condition "success or failure" Mar 8 15:34:27.728: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-30218d8c-39f8-4fd1-ad59-414f14e80ff9 container projected-secret-volume-test: STEP: delete the pod Mar 8 15:34:27.743: INFO: Waiting for pod pod-projected-secrets-30218d8c-39f8-4fd1-ad59-414f14e80ff9 to disappear Mar 8 15:34:27.759: INFO: Pod pod-projected-secrets-30218d8c-39f8-4fd1-ad59-414f14e80ff9 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:34:27.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5028" for this suite. 
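The projected secret test above verifies that defaultMode applies to files delivered through a projected volume's secret source. A minimal sketch (the pod name, image, and mode are illustrative assumptions; the secret name is taken from the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test  # container name as logged
    image: busybox                      # illustrative image
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400                 # the mode being verified on the projected files
      sources:
      - secret:
          name: projected-secret-test-f3060edf-4e6f-4a7e-a591-4cd489b73474   # name as logged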
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":130,"skipped":1828,"failed":0} SS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:34:27.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating pod Mar 8 15:34:29.962: INFO: Pod pod-hostip-bfd543d2-ce5d-48a6-ae64-b544c6b746e4 has hostIP: 172.17.0.16 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:34:29.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-640" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":280,"completed":131,"skipped":1830,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:34:30.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:34:30.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4301" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":280,"completed":132,"skipped":1831,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:34:30.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: starting the proxy server Mar 8 15:34:30.233: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:34:30.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7933" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":280,"completed":133,"skipped":1842,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:34:30.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 15:34:30.398: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:34:35.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3978" for this suite. 
• [SLOW TEST:5.011 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":280,"completed":134,"skipped":1857,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:34:35.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating all guestbook components Mar 8 15:34:35.383: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Mar 8 15:34:35.383: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1932' Mar 8 15:34:37.743: INFO: stderr: "" Mar 8 15:34:37.743: INFO: stdout: "service/agnhost-slave created\n" Mar 8 15:34:37.743: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Mar 8 15:34:37.744: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1932' Mar 8 15:34:38.135: INFO: stderr: "" Mar 8 15:34:38.135: INFO: stdout: "service/agnhost-master created\n" Mar 8 15:34:38.135: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Mar 8 15:34:38.135: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1932' Mar 8 15:34:38.420: INFO: stderr: "" Mar 8 15:34:38.420: INFO: stdout: "service/frontend created\n" Mar 8 15:34:38.420: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Mar 8 15:34:38.420: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1932' Mar 8 15:34:38.673: INFO: stderr: "" Mar 8 15:34:38.673: INFO: stdout: "deployment.apps/frontend created\n" Mar 8 15:34:38.674: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 8 15:34:38.674: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1932' Mar 8 15:34:38.997: INFO: stderr: "" Mar 8 15:34:38.997: INFO: stdout: "deployment.apps/agnhost-master created\n" Mar 8 15:34:38.997: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 8 15:34:38.998: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1932' Mar 8 15:34:39.266: INFO: stderr: "" Mar 8 15:34:39.266: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Mar 8 15:34:39.266: INFO: Waiting for all frontend pods to be Running. Mar 8 15:34:44.316: INFO: Waiting for frontend to serve content. Mar 8 15:34:44.344: INFO: Trying to add a new entry to the guestbook. Mar 8 15:34:44.353: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 8 15:34:44.361: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1932' Mar 8 15:34:44.542: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 8 15:34:44.542: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Mar 8 15:34:44.542: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1932' Mar 8 15:34:44.654: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 15:34:44.654: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 8 15:34:44.654: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1932' Mar 8 15:34:44.749: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 15:34:44.749: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 8 15:34:44.749: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1932' Mar 8 15:34:44.824: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 15:34:44.824: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 8 15:34:44.824: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1932' Mar 8 15:34:44.893: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 15:34:44.893: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 8 15:34:44.893: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1932' Mar 8 15:34:44.965: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 15:34:44.965: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:34:44.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1932" for this suite. 
• [SLOW TEST:9.666 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:388 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":280,"completed":135,"skipped":1859,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:34:44.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-5373 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating stateful set ss in namespace statefulset-5373 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5373 Mar 8 15:34:45.105: INFO: Found 0 stateful pods, waiting for 1 Mar 8 15:34:55.109: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 8 15:34:55.112: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5373 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 15:34:55.374: INFO: stderr: "I0308 15:34:55.287185 1283 log.go:172] (0xc000b702c0) (0xc000b5a000) Create stream\nI0308 15:34:55.287235 1283 log.go:172] (0xc000b702c0) (0xc000b5a000) Stream added, broadcasting: 1\nI0308 15:34:55.290990 1283 log.go:172] (0xc000b702c0) Reply frame received for 1\nI0308 15:34:55.291029 1283 log.go:172] (0xc000b702c0) (0xc000a7e000) Create stream\nI0308 15:34:55.291039 1283 log.go:172] (0xc000b702c0) (0xc000a7e000) Stream added, broadcasting: 3\nI0308 15:34:55.295290 1283 log.go:172] (0xc000b702c0) Reply frame received for 3\nI0308 15:34:55.295322 1283 log.go:172] (0xc000b702c0) (0xc000b5a0a0) Create stream\nI0308 15:34:55.295330 1283 log.go:172] (0xc000b702c0) (0xc000b5a0a0) Stream added, broadcasting: 5\nI0308 15:34:55.296315 1283 log.go:172] (0xc000b702c0) Reply frame received for 5\nI0308 15:34:55.353083 1283 log.go:172] (0xc000b702c0) Data frame received for 5\nI0308 15:34:55.353108 1283 log.go:172] (0xc000b5a0a0) (5) Data frame handling\nI0308 15:34:55.353130 1283 log.go:172] 
(0xc000b5a0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 15:34:55.369636 1283 log.go:172] (0xc000b702c0) Data frame received for 3\nI0308 15:34:55.369659 1283 log.go:172] (0xc000a7e000) (3) Data frame handling\nI0308 15:34:55.369673 1283 log.go:172] (0xc000a7e000) (3) Data frame sent\nI0308 15:34:55.369679 1283 log.go:172] (0xc000b702c0) Data frame received for 3\nI0308 15:34:55.369683 1283 log.go:172] (0xc000a7e000) (3) Data frame handling\nI0308 15:34:55.369739 1283 log.go:172] (0xc000b702c0) Data frame received for 5\nI0308 15:34:55.369762 1283 log.go:172] (0xc000b5a0a0) (5) Data frame handling\nI0308 15:34:55.371030 1283 log.go:172] (0xc000b702c0) Data frame received for 1\nI0308 15:34:55.371045 1283 log.go:172] (0xc000b5a000) (1) Data frame handling\nI0308 15:34:55.371060 1283 log.go:172] (0xc000b5a000) (1) Data frame sent\nI0308 15:34:55.371073 1283 log.go:172] (0xc000b702c0) (0xc000b5a000) Stream removed, broadcasting: 1\nI0308 15:34:55.371157 1283 log.go:172] (0xc000b702c0) Go away received\nI0308 15:34:55.371941 1283 log.go:172] (0xc000b702c0) (0xc000b5a000) Stream removed, broadcasting: 1\nI0308 15:34:55.371972 1283 log.go:172] (0xc000b702c0) (0xc000a7e000) Stream removed, broadcasting: 3\nI0308 15:34:55.371991 1283 log.go:172] (0xc000b702c0) (0xc000b5a0a0) Stream removed, broadcasting: 5\n" Mar 8 15:34:55.375: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 15:34:55.375: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 15:34:55.378: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 8 15:35:05.383: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 8 15:35:05.383: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 15:35:05.401: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 15:35:05.401: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:34:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:34:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:34:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:34:45 +0000 UTC }] Mar 8 15:35:05.402: INFO: Mar 8 15:35:05.402: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 8 15:35:06.406: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.99200103s Mar 8 15:35:07.411: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987148744s Mar 8 15:35:08.416: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.982508558s Mar 8 15:35:09.423: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.977417711s Mar 8 15:35:10.427: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.970109529s Mar 8 15:35:11.432: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.966165813s Mar 8 15:35:12.436: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.961443996s Mar 8 15:35:13.440: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.957537316s Mar 8 15:35:14.445: INFO: Verifying statefulset ss doesn't scale past 3 for another 953.373917ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace 
statefulset-5373 Mar 8 15:35:15.449: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5373 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 15:35:15.679: INFO: stderr: "I0308 15:35:15.602731 1303 log.go:172] (0xc00003b600) (0xc00087e000) Create stream\nI0308 15:35:15.602794 1303 log.go:172] (0xc00003b600) (0xc00087e000) Stream added, broadcasting: 1\nI0308 15:35:15.605471 1303 log.go:172] (0xc00003b600) Reply frame received for 1\nI0308 15:35:15.605517 1303 log.go:172] (0xc00003b600) (0xc000623ae0) Create stream\nI0308 15:35:15.605530 1303 log.go:172] (0xc00003b600) (0xc000623ae0) Stream added, broadcasting: 3\nI0308 15:35:15.606530 1303 log.go:172] (0xc00003b600) Reply frame received for 3\nI0308 15:35:15.606567 1303 log.go:172] (0xc00003b600) (0xc00020a000) Create stream\nI0308 15:35:15.606578 1303 log.go:172] (0xc00003b600) (0xc00020a000) Stream added, broadcasting: 5\nI0308 15:35:15.607705 1303 log.go:172] (0xc00003b600) Reply frame received for 5\nI0308 15:35:15.674221 1303 log.go:172] (0xc00003b600) Data frame received for 3\nI0308 15:35:15.674263 1303 log.go:172] (0xc000623ae0) (3) Data frame handling\nI0308 15:35:15.674273 1303 log.go:172] (0xc000623ae0) (3) Data frame sent\nI0308 15:35:15.674280 1303 log.go:172] (0xc00003b600) Data frame received for 3\nI0308 15:35:15.674287 1303 log.go:172] (0xc000623ae0) (3) Data frame handling\nI0308 15:35:15.674309 1303 log.go:172] (0xc00003b600) Data frame received for 5\nI0308 15:35:15.674335 1303 log.go:172] (0xc00020a000) (5) Data frame handling\nI0308 15:35:15.674367 1303 log.go:172] (0xc00020a000) (5) Data frame sent\nI0308 15:35:15.674382 1303 log.go:172] (0xc00003b600) Data frame received for 5\nI0308 15:35:15.674390 1303 log.go:172] (0xc00020a000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 15:35:15.675654 1303 log.go:172] (0xc00003b600) Data frame received for 1\nI0308 15:35:15.675675 1303 log.go:172] (0xc00087e000) (1) Data frame handling\nI0308 15:35:15.675690 1303 log.go:172] (0xc00087e000) (1) Data frame sent\nI0308 15:35:15.675790 1303 log.go:172] (0xc00003b600) (0xc00087e000) Stream removed, broadcasting: 1\nI0308 15:35:15.675820 1303 log.go:172] (0xc00003b600) Go away received\nI0308 15:35:15.676220 1303 log.go:172] (0xc00003b600) (0xc00087e000) Stream removed, broadcasting: 1\nI0308 15:35:15.676245 1303 log.go:172] (0xc00003b600) (0xc000623ae0) Stream removed, broadcasting: 3\nI0308 15:35:15.676257 1303 log.go:172] (0xc00003b600) (0xc00020a000) Stream removed, broadcasting: 5\n" Mar 8 15:35:15.679: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 15:35:15.679: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 15:35:15.679: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5373 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 15:35:15.911: INFO: stderr: "I0308 15:35:15.848907 1325 log.go:172] (0xc00003b810) (0xc00089e000) Create stream\nI0308 15:35:15.848955 1325 log.go:172] (0xc00003b810) (0xc00089e000) Stream added, broadcasting: 1\nI0308 15:35:15.851300 1325 log.go:172] (0xc00003b810) Reply frame received for 1\nI0308 15:35:15.851333 1325 log.go:172] (0xc00003b810) (0xc000930000) Create stream\nI0308 
15:35:15.851343 1325 log.go:172] (0xc00003b810) (0xc000930000) Stream added, broadcasting: 3\nI0308 15:35:15.852172 1325 log.go:172] (0xc00003b810) Reply frame received for 3\nI0308 15:35:15.852199 1325 log.go:172] (0xc00003b810) (0xc000653b80) Create stream\nI0308 15:35:15.852208 1325 log.go:172] (0xc00003b810) (0xc000653b80) Stream added, broadcasting: 5\nI0308 15:35:15.852868 1325 log.go:172] (0xc00003b810) Reply frame received for 5\nI0308 15:35:15.907714 1325 log.go:172] (0xc00003b810) Data frame received for 5\nI0308 15:35:15.907738 1325 log.go:172] (0xc000653b80) (5) Data frame handling\nI0308 15:35:15.907751 1325 log.go:172] (0xc000653b80) (5) Data frame sent\nI0308 15:35:15.907760 1325 log.go:172] (0xc00003b810) Data frame received for 5\nI0308 15:35:15.907767 1325 log.go:172] (0xc000653b80) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0308 15:35:15.907783 1325 log.go:172] (0xc00003b810) Data frame received for 3\nI0308 15:35:15.907797 1325 log.go:172] (0xc000930000) (3) Data frame handling\nI0308 15:35:15.907807 1325 log.go:172] (0xc000930000) (3) Data frame sent\nI0308 15:35:15.907820 1325 log.go:172] (0xc00003b810) Data frame received for 3\nI0308 15:35:15.907824 1325 log.go:172] (0xc000930000) (3) Data frame handling\nI0308 15:35:15.907858 1325 log.go:172] (0xc00003b810) Data frame received for 1\nI0308 15:35:15.907911 1325 log.go:172] (0xc00089e000) (1) Data frame handling\nI0308 15:35:15.907930 1325 log.go:172] (0xc00089e000) (1) Data frame sent\nI0308 15:35:15.907948 1325 log.go:172] (0xc00003b810) (0xc00089e000) Stream removed, broadcasting: 1\nI0308 15:35:15.907966 1325 log.go:172] (0xc00003b810) Go away received\nI0308 15:35:15.908322 1325 log.go:172] (0xc00003b810) (0xc00089e000) Stream removed, broadcasting: 1\nI0308 15:35:15.908348 1325 log.go:172] (0xc00003b810) (0xc000930000) Stream removed, broadcasting: 3\nI0308 15:35:15.908365 1325 log.go:172] (0xc00003b810) (0xc000653b80) Stream removed, broadcasting: 5\n" Mar 8 15:35:15.911: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 15:35:15.911: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 15:35:15.911: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5373 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 15:35:16.110: INFO: stderr: "I0308 15:35:16.032374 1348 log.go:172] (0xc00003af20) (0xc0005bc1e0) Create stream\nI0308 15:35:16.032419 1348 log.go:172] (0xc00003af20) (0xc0005bc1e0) Stream added, broadcasting: 1\nI0308 15:35:16.034740 1348 log.go:172] (0xc00003af20) Reply frame received for 1\nI0308 15:35:16.034792 1348 log.go:172] (0xc00003af20) (0xc0005dfc20) Create stream\nI0308 15:35:16.034811 1348 log.go:172] (0xc00003af20) (0xc0005dfc20) Stream added, broadcasting: 3\nI0308 15:35:16.035581 1348 log.go:172] (0xc00003af20) Reply frame received for 3\nI0308 15:35:16.035608 1348 log.go:172] (0xc00003af20) (0xc0005dfe00) Create stream\nI0308 15:35:16.035619 1348 log.go:172] (0xc00003af20) (0xc0005dfe00) Stream added, broadcasting: 5\nI0308 15:35:16.036285 1348 log.go:172] (0xc00003af20) Reply frame received for 5\nI0308 15:35:16.102025 1348 log.go:172] (0xc00003af20) Data frame received for 5\nI0308 15:35:16.102054 1348 log.go:172] (0xc0005dfe00) (5) Data frame 
handling\nI0308 15:35:16.102066 1348 log.go:172] (0xc0005dfe00) (5) Data frame sent\nI0308 15:35:16.102074 1348 log.go:172] (0xc00003af20) Data frame received for 5\nI0308 15:35:16.102081 1348 log.go:172] (0xc0005dfe00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0308 15:35:16.102100 1348 log.go:172] (0xc00003af20) Data frame received for 3\nI0308 15:35:16.102112 1348 log.go:172] (0xc0005dfc20) (3) Data frame handling\nI0308 15:35:16.102155 1348 log.go:172] (0xc0005dfc20) (3) Data frame sent\nI0308 15:35:16.102165 1348 log.go:172] (0xc00003af20) Data frame received for 3\nI0308 15:35:16.102173 1348 log.go:172] (0xc0005dfc20) (3) Data frame handling\nI0308 15:35:16.107691 1348 log.go:172] (0xc00003af20) Data frame received for 1\nI0308 15:35:16.107712 1348 log.go:172] (0xc0005bc1e0) (1) Data frame handling\nI0308 15:35:16.107725 1348 log.go:172] (0xc0005bc1e0) (1) Data frame sent\nI0308 15:35:16.107735 1348 log.go:172] (0xc00003af20) (0xc0005bc1e0) Stream removed, broadcasting: 1\nI0308 15:35:16.107772 1348 log.go:172] (0xc00003af20) Go away received\nI0308 15:35:16.107981 1348 log.go:172] (0xc00003af20) (0xc0005bc1e0) Stream removed, broadcasting: 1\nI0308 15:35:16.107995 1348 log.go:172] (0xc00003af20) (0xc0005dfc20) Stream removed, broadcasting: 3\nI0308 15:35:16.108001 1348 log.go:172] (0xc00003af20) (0xc0005dfe00) Stream removed, broadcasting: 5\n" Mar 8 15:35:16.110: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 15:35:16.110: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 15:35:16.113: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Mar 8 15:35:26.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 8 15:35:26.119: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 8 15:35:26.119: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 8 15:35:26.123: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5373 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 15:35:26.319: INFO: stderr: "I0308 15:35:26.256647 1370 log.go:172] (0xc00098d600) (0xc00094c780) Create stream\nI0308 15:35:26.256699 1370 log.go:172] (0xc00098d600) (0xc00094c780) Stream added, broadcasting: 1\nI0308 15:35:26.259666 1370 log.go:172] (0xc00098d600) Reply frame received for 1\nI0308 15:35:26.259698 1370 log.go:172] (0xc00098d600) (0xc0005d26e0) Create stream\nI0308 15:35:26.259705 1370 log.go:172] (0xc00098d600) (0xc0005d26e0) Stream added, broadcasting: 3\nI0308 15:35:26.260426 1370 log.go:172] (0xc00098d600) Reply frame received for 3\nI0308 15:35:26.260447 1370 log.go:172] (0xc00098d600) (0xc0003cf360) Create stream\nI0308 15:35:26.260454 1370 log.go:172] (0xc00098d600) (0xc0003cf360) Stream added, broadcasting: 5\nI0308 15:35:26.261153 1370 log.go:172] (0xc00098d600) Reply frame received for 5\nI0308 15:35:26.315414 1370 log.go:172] (0xc00098d600) Data frame received for 3\nI0308 15:35:26.315443 1370 log.go:172] (0xc0005d26e0) (3) Data frame handling\nI0308 15:35:26.315451 1370 log.go:172] (0xc0005d26e0) (3) Data frame sent\nI0308 
15:35:26.315456 1370 log.go:172] (0xc00098d600) Data frame received for 3\nI0308 15:35:26.315462 1370 log.go:172] (0xc0005d26e0) (3) Data frame handling\nI0308 15:35:26.315481 1370 log.go:172] (0xc00098d600) Data frame received for 5\nI0308 15:35:26.315486 1370 log.go:172] (0xc0003cf360) (5) Data frame handling\nI0308 15:35:26.315492 1370 log.go:172] (0xc0003cf360) (5) Data frame sent\nI0308 15:35:26.315497 1370 log.go:172] (0xc00098d600) Data frame received for 5\nI0308 15:35:26.315502 1370 log.go:172] (0xc0003cf360) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 15:35:26.316348 1370 log.go:172] (0xc00098d600) Data frame received for 1\nI0308 15:35:26.316367 1370 log.go:172] (0xc00094c780) (1) Data frame handling\nI0308 15:35:26.316375 1370 log.go:172] (0xc00094c780) (1) Data frame sent\nI0308 15:35:26.316384 1370 log.go:172] (0xc00098d600) (0xc00094c780) Stream removed, broadcasting: 1\nI0308 15:35:26.316394 1370 log.go:172] (0xc00098d600) Go away received\nI0308 15:35:26.316606 1370 log.go:172] (0xc00098d600) (0xc00094c780) Stream removed, broadcasting: 1\nI0308 15:35:26.316618 1370 log.go:172] (0xc00098d600) (0xc0005d26e0) Stream removed, broadcasting: 3\nI0308 15:35:26.316623 1370 log.go:172] (0xc00098d600) (0xc0003cf360) Stream removed, broadcasting: 5\n" Mar 8 15:35:26.319: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 15:35:26.319: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 15:35:26.319: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5373 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 15:35:26.488: INFO: stderr: "I0308 15:35:26.415344 1390 log.go:172] (0xc000957130) (0xc000663e00) Create stream\nI0308 15:35:26.415381 1390 log.go:172] (0xc000957130) (0xc000663e00) Stream added, broadcasting: 1\nI0308 15:35:26.418265 1390 log.go:172] (0xc000957130) Reply frame received for 1\nI0308 15:35:26.418568 1390 log.go:172] (0xc000957130) (0xc000663ae0) Create stream\nI0308 15:35:26.418577 1390 log.go:172] (0xc000957130) (0xc000663ae0) Stream added, broadcasting: 3\nI0308 15:35:26.419219 1390 log.go:172] (0xc000957130) Reply frame received for 3\nI0308 15:35:26.419241 1390 log.go:172] (0xc000957130) (0xc0006fb360) Create stream\nI0308 15:35:26.419249 1390 log.go:172] (0xc000957130) (0xc0006fb360) Stream added, broadcasting: 5\nI0308 15:35:26.419787 1390 log.go:172] (0xc000957130) Reply frame received for 5\nI0308 15:35:26.467612 1390 log.go:172] (0xc000957130) Data frame received for 5\nI0308 15:35:26.467639 1390 log.go:172] (0xc0006fb360) (5) Data frame handling\nI0308 15:35:26.467658 1390 log.go:172] (0xc0006fb360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 15:35:26.483718 1390 log.go:172] (0xc000957130) Data frame received for 5\nI0308 15:35:26.483735 1390 log.go:172] (0xc0006fb360) (5) Data frame handling\nI0308 15:35:26.483759 1390 log.go:172] (0xc000957130) Data frame received for 3\nI0308 15:35:26.483790 1390 log.go:172] (0xc000663ae0) (3) Data frame handling\nI0308 15:35:26.483810 1390 log.go:172] (0xc000663ae0) (3) Data frame sent\nI0308 15:35:26.484859 1390 log.go:172] (0xc000957130) Data frame received for 3\nI0308 15:35:26.484876 1390 log.go:172] (0xc000663ae0) (3) Data frame handling\nI0308 15:35:26.485102 1390 log.go:172] (0xc000957130) Data 
frame received for 1\nI0308 15:35:26.485113 1390 log.go:172] (0xc000663e00) (1) Data frame handling\nI0308 15:35:26.485132 1390 log.go:172] (0xc000663e00) (1) Data frame sent\nI0308 15:35:26.485264 1390 log.go:172] (0xc000957130) (0xc000663e00) Stream removed, broadcasting: 1\nI0308 15:35:26.485306 1390 log.go:172] (0xc000957130) Go away received\nI0308 15:35:26.485563 1390 log.go:172] (0xc000957130) (0xc000663e00) Stream removed, broadcasting: 1\nI0308 15:35:26.485580 1390 log.go:172] (0xc000957130) (0xc000663ae0) Stream removed, broadcasting: 3\nI0308 15:35:26.485587 1390 log.go:172] (0xc000957130) (0xc0006fb360) Stream removed, broadcasting: 5\n" Mar 8 15:35:26.488: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 15:35:26.488: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 15:35:26.488: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5373 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 15:35:26.711: INFO: stderr: "I0308 15:35:26.590069 1413 log.go:172] (0xc0009ab1e0) (0xc000974780) Create stream\nI0308 15:35:26.590112 1413 log.go:172] (0xc0009ab1e0) (0xc000974780) Stream added, broadcasting: 1\nI0308 15:35:26.594204 1413 log.go:172] (0xc0009ab1e0) Reply frame received for 1\nI0308 15:35:26.594239 1413 log.go:172] (0xc0009ab1e0) (0xc0005dc6e0) Create stream\nI0308 15:35:26.594256 1413 log.go:172] (0xc0009ab1e0) (0xc0005dc6e0) Stream added, broadcasting: 3\nI0308 15:35:26.595010 1413 log.go:172] (0xc0009ab1e0) Reply frame received for 3\nI0308 15:35:26.595032 1413 log.go:172] (0xc0009ab1e0) (0xc0005bd400) Create stream\nI0308 15:35:26.595038 1413 log.go:172] (0xc0009ab1e0) (0xc0005bd400) Stream added, broadcasting: 5\nI0308 15:35:26.595674 1413 log.go:172] (0xc0009ab1e0) Reply frame received for 5\nI0308 15:35:26.672552 1413 log.go:172] (0xc0009ab1e0) Data frame received for 5\nI0308 15:35:26.672573 1413 log.go:172] (0xc0005bd400) (5) Data frame handling\nI0308 15:35:26.672585 1413 log.go:172] (0xc0005bd400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 15:35:26.705908 1413 log.go:172] (0xc0009ab1e0) Data frame received for 3\nI0308 15:35:26.705938 1413 log.go:172] (0xc0005dc6e0) (3) Data frame handling\nI0308 15:35:26.705956 1413 log.go:172] (0xc0005dc6e0) (3) Data frame sent\nI0308 15:35:26.706472 1413 log.go:172] (0xc0009ab1e0) Data frame received for 3\nI0308 15:35:26.706504 1413 log.go:172] (0xc0005dc6e0) (3) Data frame handling\nI0308 15:35:26.706719 1413 log.go:172] (0xc0009ab1e0) Data frame received for 5\nI0308 15:35:26.706738 1413 log.go:172] (0xc0005bd400) (5) Data frame handling\nI0308 15:35:26.708164 1413 log.go:172] (0xc0009ab1e0) Data frame received for 1\nI0308 15:35:26.708185 1413 log.go:172] (0xc000974780) (1) Data frame handling\nI0308 15:35:26.708197 1413 log.go:172] (0xc000974780) (1) Data frame sent\nI0308 15:35:26.708213 1413 log.go:172] (0xc0009ab1e0) (0xc000974780) Stream removed, broadcasting: 1\nI0308 15:35:26.708238 1413 log.go:172] (0xc0009ab1e0) Go away received\nI0308 15:35:26.708563 1413 log.go:172] (0xc0009ab1e0) (0xc000974780) Stream removed, broadcasting: 1\nI0308 15:35:26.708588 1413 log.go:172] (0xc0009ab1e0) (0xc0005dc6e0) Stream removed, broadcasting: 3\nI0308 15:35:26.708600 1413 log.go:172] (0xc0009ab1e0) (0xc0005bd400) Stream removed, broadcasting: 5\n" Mar 
8 15:35:26.711: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 15:35:26.711: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 15:35:26.711: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 15:35:26.715: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 8 15:35:36.723: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 8 15:35:36.723: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 8 15:35:36.723: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 8 15:35:36.740: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 15:35:36.740: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:34:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:35:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:35:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:34:45 +0000 UTC }] Mar 8 15:35:36.740: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:35:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:35:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:35:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:35:05 +0000 UTC }] Mar 8 15:35:36.740: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:35:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:35:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:35:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:35:05 +0000 UTC }] Mar 8 15:35:36.740: INFO: Mar 8 15:35:36.740: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 8 15:35:37.744: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 15:35:37.744: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:34:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:35:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:35:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:34:45 +0000 UTC }] Mar 8 15:35:37.744: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:35:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:35:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:35:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:35:05 +0000 UTC }] Mar 8 15:35:37.744: INFO: ss-2 latest-worker Running 30s 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:35:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:35:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:35:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:35:05 +0000 UTC }] Mar 8 15:35:37.744: INFO: Mar 8 15:35:37.744: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 8 15:35:38.777: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 15:35:38.777: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:35:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:35:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:35:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:35:05 +0000 UTC }] Mar 8 15:35:38.777: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:35:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:35:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:35:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:35:05 +0000 UTC }] Mar 8 15:35:38.777: INFO: Mar 8 15:35:38.777: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 8 15:35:39.789: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.956623325s Mar 8 15:35:40.792: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.945025619s Mar 8 15:35:41.796: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.941302127s Mar 8 15:35:42.800: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.938093252s Mar 8 15:35:43.803: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.933802381s Mar 8 15:35:44.806: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.930732297s Mar 8 15:35:45.855: INFO: Verifying statefulset ss doesn't scale past 0 for another 927.736924ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-5373 Mar 8 15:35:46.859: INFO: Scaling statefulset ss to 0 Mar 8 15:35:46.867: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 8 15:35:46.869: INFO: Deleting all statefulset in ns statefulset-5373 Mar 8 15:35:46.871: INFO: Scaling statefulset ss to 0 Mar 8 15:35:46.878: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 15:35:46.880: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:35:46.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5373" for this suite.
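What "burst scaling" exercises above: the stateful set is created with parallel pod management, so the controller may create and delete pods all at once rather than in ordinal order, and the test verifies that scaling 1 -> 3 -> 0 completes even while pods are deliberately unready (the mv of index.html in the exec calls breaks and later restores the readiness probe, which is why the webserver container flips between Ready=true and Ready=false in the conditions). A sketch of such a stateful set, assuming an httpd image to match the /usr/local/apache2/htdocs paths logged above and hypothetical labels:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test                     # matches "Creating service test" above
  podManagementPolicy: Parallel         # burst mode: no ordered, one-at-a-time rollout
  replicas: 1                           # the test scales 1 -> 3 -> 0
  selector:
    matchLabels:
      app: ss                           # hypothetical labels
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver                 # the container reported NotReady above
        image: httpd:2.4.38-alpine      # assumption, inferred from the htdocs paths
        readinessProbe:
          httpGet:
            path: /index.html           # mv index.html /tmp fails this; mv back restores it
            port: 80
          periodSeconds: 1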
• [SLOW TEST:61.909 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":280,"completed":136,"skipped":1879,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:35:46.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-configmap-js8h STEP: Creating a pod to test atomic-volume-subpath Mar 8 15:35:47.042: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-js8h" in namespace "subpath-3782" to be "success or failure" Mar 8 15:35:47.057: INFO: Pod "pod-subpath-test-configmap-js8h": Phase="Pending", Reason="", readiness=false. Elapsed: 15.436661ms Mar 8 15:35:49.071: INFO: Pod "pod-subpath-test-configmap-js8h": Phase="Running", Reason="", readiness=true. Elapsed: 2.028892224s Mar 8 15:35:51.074: INFO: Pod "pod-subpath-test-configmap-js8h": Phase="Running", Reason="", readiness=true. Elapsed: 4.032555328s Mar 8 15:35:53.078: INFO: Pod "pod-subpath-test-configmap-js8h": Phase="Running", Reason="", readiness=true. Elapsed: 6.03573611s Mar 8 15:35:55.082: INFO: Pod "pod-subpath-test-configmap-js8h": Phase="Running", Reason="", readiness=true. Elapsed: 8.039696393s Mar 8 15:35:57.085: INFO: Pod "pod-subpath-test-configmap-js8h": Phase="Running", Reason="", readiness=true. Elapsed: 10.043081497s Mar 8 15:35:59.107: INFO: Pod "pod-subpath-test-configmap-js8h": Phase="Running", Reason="", readiness=true. Elapsed: 12.064879918s Mar 8 15:36:01.111: INFO: Pod "pod-subpath-test-configmap-js8h": Phase="Running", Reason="", readiness=true. Elapsed: 14.068658296s Mar 8 15:36:03.113: INFO: Pod "pod-subpath-test-configmap-js8h": Phase="Running", Reason="", readiness=true. Elapsed: 16.071289126s Mar 8 15:36:05.117: INFO: Pod "pod-subpath-test-configmap-js8h": Phase="Running", Reason="", readiness=true. Elapsed: 18.075146881s Mar 8 15:36:07.122: INFO: Pod "pod-subpath-test-configmap-js8h": Phase="Running", Reason="", readiness=true. Elapsed: 20.079718401s Mar 8 15:36:09.197: INFO: Pod "pod-subpath-test-configmap-js8h": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.15467081s STEP: Saw pod success Mar 8 15:36:09.197: INFO: Pod "pod-subpath-test-configmap-js8h" satisfied condition "success or failure" Mar 8 15:36:09.199: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-js8h container test-container-subpath-configmap-js8h: STEP: delete the pod Mar 8 15:36:09.236: INFO: Waiting for pod pod-subpath-test-configmap-js8h to disappear Mar 8 15:36:09.255: INFO: Pod pod-subpath-test-configmap-js8h no longer exists STEP: Deleting pod pod-subpath-test-configmap-js8h Mar 8 15:36:09.255: INFO: Deleting pod "pod-subpath-test-configmap-js8h" in namespace "subpath-3782" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:36:09.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3782" for this suite. • [SLOW TEST:22.362 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":280,"completed":137,"skipped":1882,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:36:09.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service externalname-service with the type=ExternalName in namespace services-6398 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-6398 I0308 15:36:09.513442 7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-6398, replica count: 2 I0308 15:36:12.563874 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 8 15:36:12.563: INFO: Creating new exec pod Mar 8 15:36:17.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-6398 execpod7kqqm -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 8 15:36:17.811: INFO: stderr: "I0308 15:36:17.727723 1433 log.go:172] (0xc0009a2000) (0xc0006d1ae0) Create stream\nI0308 15:36:17.727767 1433 log.go:172] (0xc0009a2000) (0xc0006d1ae0) Stream added, broadcasting: 1\nI0308 
15:36:17.729703 1433 log.go:172] (0xc0009a2000) Reply frame received for 1\nI0308 15:36:17.729735 1433 log.go:172] (0xc0009a2000) (0xc000932000) Create stream\nI0308 15:36:17.729746 1433 log.go:172] (0xc0009a2000) (0xc000932000) Stream added, broadcasting: 3\nI0308 15:36:17.730623 1433 log.go:172] (0xc0009a2000) Reply frame received for 3\nI0308 15:36:17.730647 1433 log.go:172] (0xc0009a2000) (0xc000026000) Create stream\nI0308 15:36:17.730660 1433 log.go:172] (0xc0009a2000) (0xc000026000) Stream added, broadcasting: 5\nI0308 15:36:17.731413 1433 log.go:172] (0xc0009a2000) Reply frame received for 5\nI0308 15:36:17.804582 1433 log.go:172] (0xc0009a2000) Data frame received for 5\nI0308 15:36:17.804605 1433 log.go:172] (0xc000026000) (5) Data frame handling\nI0308 15:36:17.804620 1433 log.go:172] (0xc000026000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0308 15:36:17.805913 1433 log.go:172] (0xc0009a2000) Data frame received for 3\nI0308 15:36:17.805933 1433 log.go:172] (0xc000932000) (3) Data frame handling\nI0308 15:36:17.805971 1433 log.go:172] (0xc0009a2000) Data frame received for 5\nI0308 15:36:17.805993 1433 log.go:172] (0xc000026000) (5) Data frame handling\nI0308 15:36:17.806005 1433 log.go:172] (0xc000026000) (5) Data frame sent\nI0308 15:36:17.806013 1433 log.go:172] (0xc0009a2000) Data frame received for 5\nI0308 15:36:17.806020 1433 log.go:172] (0xc000026000) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0308 15:36:17.807808 1433 log.go:172] (0xc0009a2000) Data frame received for 1\nI0308 15:36:17.807838 1433 log.go:172] (0xc0006d1ae0) (1) Data frame handling\nI0308 15:36:17.807852 1433 log.go:172] (0xc0006d1ae0) (1) Data frame sent\nI0308 15:36:17.807871 1433 log.go:172] (0xc0009a2000) (0xc0006d1ae0) Stream removed, broadcasting: 1\nI0308 15:36:17.807891 1433 log.go:172] (0xc0009a2000) Go away received\nI0308 15:36:17.808255 1433 log.go:172] (0xc0009a2000) (0xc0006d1ae0) Stream removed, broadcasting: 1\nI0308 15:36:17.808275 1433 log.go:172] (0xc0009a2000) (0xc000932000) Stream removed, broadcasting: 3\nI0308 15:36:17.808283 1433 log.go:172] (0xc0009a2000) (0xc000026000) Stream removed, broadcasting: 5\n" Mar 8 15:36:17.811: INFO: stdout: "" Mar 8 15:36:17.811: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-6398 execpod7kqqm -- /bin/sh -x -c nc -zv -t -w 2 10.96.183.5 80' Mar 8 15:36:17.992: INFO: stderr: "I0308 15:36:17.924021 1454 log.go:172] (0xc0000e8420) (0xc0009e60a0) Create stream\nI0308 15:36:17.924065 1454 log.go:172] (0xc0000e8420) (0xc0009e60a0) Stream added, broadcasting: 1\nI0308 15:36:17.929917 1454 log.go:172] (0xc0000e8420) Reply frame received for 1\nI0308 15:36:17.929970 1454 log.go:172] (0xc0000e8420) (0xc0009fc000) Create stream\nI0308 15:36:17.929984 1454 log.go:172] (0xc0000e8420) (0xc0009fc000) Stream added, broadcasting: 3\nI0308 15:36:17.931002 1454 log.go:172] (0xc0000e8420) Reply frame received for 3\nI0308 15:36:17.931033 1454 log.go:172] (0xc0000e8420) (0xc000403360) Create stream\nI0308 15:36:17.931044 1454 log.go:172] (0xc0000e8420) (0xc000403360) Stream added, broadcasting: 5\nI0308 15:36:17.931820 1454 log.go:172] (0xc0000e8420) Reply frame received for 5\nI0308 15:36:17.988133 1454 log.go:172] (0xc0000e8420) Data frame received for 5\nI0308 15:36:17.988170 1454 log.go:172] (0xc000403360) (5) Data frame handling\nI0308 15:36:17.988180 1454 log.go:172] (0xc000403360) (5) Data frame sent\nI0308 
15:36:17.988187 1454 log.go:172] (0xc0000e8420) Data frame received for 5\nI0308 15:36:17.988193 1454 log.go:172] (0xc000403360) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.183.5 80\nConnection to 10.96.183.5 80 port [tcp/http] succeeded!\nI0308 15:36:17.988214 1454 log.go:172] (0xc0000e8420) Data frame received for 3\nI0308 15:36:17.988225 1454 log.go:172] (0xc0009fc000) (3) Data frame handling\nI0308 15:36:17.989614 1454 log.go:172] (0xc0000e8420) Data frame received for 1\nI0308 15:36:17.989719 1454 log.go:172] (0xc0009e60a0) (1) Data frame handling\nI0308 15:36:17.989753 1454 log.go:172] (0xc0009e60a0) (1) Data frame sent\nI0308 15:36:17.989775 1454 log.go:172] (0xc0000e8420) (0xc0009e60a0) Stream removed, broadcasting: 1\nI0308 15:36:17.989797 1454 log.go:172] (0xc0000e8420) Go away received\nI0308 15:36:17.990103 1454 log.go:172] (0xc0000e8420) (0xc0009e60a0) Stream removed, broadcasting: 1\nI0308 15:36:17.990154 1454 log.go:172] (0xc0000e8420) (0xc0009fc000) Stream removed, broadcasting: 3\nI0308 15:36:17.990168 1454 log.go:172] (0xc0000e8420) (0xc000403360) Stream removed, broadcasting: 5\n" Mar 8 15:36:17.992: INFO: stdout: "" Mar 8 15:36:17.993: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-6398 execpod7kqqm -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.16 31190' Mar 8 15:36:18.165: INFO: stderr: "I0308 15:36:18.097155 1475 log.go:172] (0xc000754630) (0xc000ac2280) Create stream\nI0308 15:36:18.097190 1475 log.go:172] (0xc000754630) (0xc000ac2280) Stream added, broadcasting: 1\nI0308 15:36:18.098673 1475 log.go:172] (0xc000754630) Reply frame received for 1\nI0308 15:36:18.098694 1475 log.go:172] (0xc000754630) (0xc000636820) Create stream\nI0308 15:36:18.098702 1475 log.go:172] (0xc000754630) (0xc000636820) Stream added, broadcasting: 3\nI0308 15:36:18.099325 1475 log.go:172] (0xc000754630) Reply frame received for 3\nI0308 15:36:18.099344 1475 log.go:172] (0xc000754630) (0xc0001f74a0) Create stream\nI0308 15:36:18.099350 1475 log.go:172] (0xc000754630) (0xc0001f74a0) Stream added, broadcasting: 5\nI0308 15:36:18.099964 1475 log.go:172] (0xc000754630) Reply frame received for 5\nI0308 15:36:18.160304 1475 log.go:172] (0xc000754630) Data frame received for 5\nI0308 15:36:18.160326 1475 log.go:172] (0xc0001f74a0) (5) Data frame handling\nI0308 15:36:18.160338 1475 log.go:172] (0xc0001f74a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.16 31190\nI0308 15:36:18.161224 1475 log.go:172] (0xc000754630) Data frame received for 3\nI0308 15:36:18.161260 1475 log.go:172] (0xc000636820) (3) Data frame handling\nI0308 15:36:18.161280 1475 log.go:172] (0xc000754630) Data frame received for 5\nI0308 15:36:18.161290 1475 log.go:172] (0xc0001f74a0) (5) Data frame handling\nI0308 15:36:18.161298 1475 log.go:172] (0xc0001f74a0) (5) Data frame sent\nI0308 15:36:18.161308 1475 log.go:172] (0xc000754630) Data frame received for 5\nI0308 15:36:18.161315 1475 log.go:172] (0xc0001f74a0) (5) Data frame handling\nConnection to 172.17.0.16 31190 port [tcp/31190] succeeded!\nI0308 15:36:18.162797 1475 log.go:172] (0xc000754630) Data frame received for 1\nI0308 15:36:18.162816 1475 log.go:172] (0xc000ac2280) (1) Data frame handling\nI0308 15:36:18.162830 1475 log.go:172] (0xc000ac2280) (1) Data frame sent\nI0308 15:36:18.162846 1475 log.go:172] (0xc000754630) (0xc000ac2280) Stream removed, broadcasting: 1\nI0308 15:36:18.162865 1475 log.go:172] (0xc000754630) Go away received\nI0308 15:36:18.163269 1475 log.go:172] 
(0xc000754630) (0xc000ac2280) Stream removed, broadcasting: 1\nI0308 15:36:18.163289 1475 log.go:172] (0xc000754630) (0xc000636820) Stream removed, broadcasting: 3\nI0308 15:36:18.163298 1475 log.go:172] (0xc000754630) (0xc0001f74a0) Stream removed, broadcasting: 5\n" Mar 8 15:36:18.165: INFO: stdout: "" Mar 8 15:36:18.165: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-6398 execpod7kqqm -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 31190' Mar 8 15:36:18.333: INFO: stderr: "I0308 15:36:18.266405 1495 log.go:172] (0xc000a25080) (0xc000b466e0) Create stream\nI0308 15:36:18.266441 1495 log.go:172] (0xc000a25080) (0xc000b466e0) Stream added, broadcasting: 1\nI0308 15:36:18.269300 1495 log.go:172] (0xc000a25080) Reply frame received for 1\nI0308 15:36:18.269325 1495 log.go:172] (0xc000a25080) (0xc000624780) Create stream\nI0308 15:36:18.269331 1495 log.go:172] (0xc000a25080) (0xc000624780) Stream added, broadcasting: 3\nI0308 15:36:18.269884 1495 log.go:172] (0xc000a25080) Reply frame received for 3\nI0308 15:36:18.269918 1495 log.go:172] (0xc000a25080) (0xc000433400) Create stream\nI0308 15:36:18.269929 1495 log.go:172] (0xc000a25080) (0xc000433400) Stream added, broadcasting: 5\nI0308 15:36:18.270828 1495 log.go:172] (0xc000a25080) Reply frame received for 5\nI0308 15:36:18.327467 1495 log.go:172] (0xc000a25080) Data frame received for 5\nI0308 15:36:18.327502 1495 log.go:172] (0xc000433400) (5) Data frame handling\nI0308 15:36:18.327515 1495 log.go:172] (0xc000433400) (5) Data frame sent\nI0308 15:36:18.327525 1495 log.go:172] (0xc000a25080) Data frame received for 5\nI0308 15:36:18.327533 1495 log.go:172] (0xc000433400) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.18 31190\nConnection to 172.17.0.18 31190 port [tcp/31190] succeeded!\nI0308 15:36:18.327546 1495 log.go:172] (0xc000a25080) Data frame received for 3\nI0308 15:36:18.327635 1495 log.go:172] (0xc000624780) (3) Data frame handling\nI0308 15:36:18.329004 1495 log.go:172] (0xc000a25080) Data frame received for 1\nI0308 15:36:18.329032 1495 log.go:172] (0xc000b466e0) (1) Data frame handling\nI0308 15:36:18.329071 1495 log.go:172] (0xc000b466e0) (1) Data frame sent\nI0308 15:36:18.329092 1495 log.go:172] (0xc000a25080) (0xc000b466e0) Stream removed, broadcasting: 1\nI0308 15:36:18.329116 1495 log.go:172] (0xc000a25080) Go away received\nI0308 15:36:18.329628 1495 log.go:172] (0xc000a25080) (0xc000b466e0) Stream removed, broadcasting: 1\nI0308 15:36:18.329654 1495 log.go:172] (0xc000a25080) (0xc000624780) Stream removed, broadcasting: 3\nI0308 15:36:18.329668 1495 log.go:172] (0xc000a25080) (0xc000433400) Stream removed, broadcasting: 5\n" Mar 8 15:36:18.333: INFO: stdout: "" Mar 8 15:36:18.333: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:36:18.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6398" for this suite. 
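Note: the three checks above exercise the converted service by DNS name, by ClusterIP, and by node IP plus NodePort. With nc, -z probes the port without sending data, -v prints the outcome, and -w 2 caps each attempt at two seconds. A by-hand equivalent (the exec pod name, IPs, and port 31190 are specific to this run):
  kubectl exec -n services-6398 execpod7kqqm -- /bin/sh -c 'nc -zv -w 2 externalname-service 80'
  kubectl exec -n services-6398 execpod7kqqm -- /bin/sh -c 'nc -zv -w 2 10.96.183.5 80'       # ClusterIP
  kubectl exec -n services-6398 execpod7kqqm -- /bin/sh -c 'nc -zv -w 2 172.17.0.16 31190'    # node IP:NodePort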
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:9.122 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":280,"completed":138,"skipped":1899,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:36:18.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Mar 8 15:36:18.432: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:36:21.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1004" for this suite. 
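Note: the InitContainer test above relies on the rule that, with restartPolicy: Never, a failing init container moves the pod to Failed and the app containers never start. A minimal sketch of that pod shape (all names here are illustrative, not from the run):
  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-fails-demo            # illustrative name
  spec:
    restartPolicy: Never
    initContainers:
    - name: init
      image: busybox
      command: ["/bin/false"]        # always fails, so the pod goes Failed
    containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]     # never started, because init failed
  EOF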
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":280,"completed":139,"skipped":1916,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:36:21.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1598 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 8 15:36:21.821: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2896' Mar 8 15:36:21.948: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 8 15:36:21.948: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1604 Mar 8 15:36:23.971: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2896' Mar 8 15:36:24.077: INFO: stderr: "" Mar 8 15:36:24.078: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:36:24.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2896" for this suite. 
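Note: the stderr above is kubectl itself warning that generator-based kubectl run is deprecated. The non-deprecated equivalent of the command this test runs is:
  kubectl create deployment e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2896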
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":280,"completed":140,"skipped":1935,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:36:24.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-39c2f910-bf33-41d4-b8b3-0af2d7ae67fd STEP: Creating a pod to test consume configMaps Mar 8 15:36:24.152: INFO: Waiting up to 5m0s for pod "pod-configmaps-3497e668-e7f4-4b24-9c7e-a02413477a67" in namespace "configmap-8932" to be "success or failure" Mar 8 15:36:24.157: INFO: Pod "pod-configmaps-3497e668-e7f4-4b24-9c7e-a02413477a67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.506954ms Mar 8 15:36:26.160: INFO: Pod "pod-configmaps-3497e668-e7f4-4b24-9c7e-a02413477a67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007918784s STEP: Saw pod success Mar 8 15:36:26.160: INFO: Pod "pod-configmaps-3497e668-e7f4-4b24-9c7e-a02413477a67" satisfied condition "success or failure" Mar 8 15:36:26.162: INFO: Trying to get logs from node latest-worker pod pod-configmaps-3497e668-e7f4-4b24-9c7e-a02413477a67 container configmap-volume-test: STEP: delete the pod Mar 8 15:36:26.182: INFO: Waiting for pod pod-configmaps-3497e668-e7f4-4b24-9c7e-a02413477a67 to disappear Mar 8 15:36:26.187: INFO: Pod pod-configmaps-3497e668-e7f4-4b24-9c7e-a02413477a67 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:36:26.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8932" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":141,"skipped":1940,"failed":0} SSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:36:26.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:36:26.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-1045" for this suite. 
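Note: the Lease test exercises the coordination.k8s.io/v1 API group. Outside the suite the same availability can be checked directly; node heartbeat leases live in the kube-node-lease namespace:
  kubectl get --raw /apis/coordination.k8s.io/v1
  kubectl get leases --all-namespaces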
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":280,"completed":142,"skipped":1946,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:36:26.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:36:26.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7219" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":280,"completed":143,"skipped":1964,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:36:26.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name projected-secret-test-map-1e7ed6e7-42db-4240-88f1-b83118189abe STEP: Creating a pod to test consume secrets Mar 8 15:36:26.452: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1e6e5a4d-f44f-4a2c-9548-1567daf302a1" in namespace "projected-6415" to be "success or failure" Mar 8 15:36:26.456: INFO: Pod "pod-projected-secrets-1e6e5a4d-f44f-4a2c-9548-1567daf302a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.393414ms Mar 8 15:36:28.460: INFO: Pod "pod-projected-secrets-1e6e5a4d-f44f-4a2c-9548-1567daf302a1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007869257s STEP: Saw pod success Mar 8 15:36:28.460: INFO: Pod "pod-projected-secrets-1e6e5a4d-f44f-4a2c-9548-1567daf302a1" satisfied condition "success or failure" Mar 8 15:36:28.462: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-1e6e5a4d-f44f-4a2c-9548-1567daf302a1 container projected-secret-volume-test: STEP: delete the pod Mar 8 15:36:28.480: INFO: Waiting for pod pod-projected-secrets-1e6e5a4d-f44f-4a2c-9548-1567daf302a1 to disappear Mar 8 15:36:28.508: INFO: Pod pod-projected-secrets-1e6e5a4d-f44f-4a2c-9548-1567daf302a1 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:36:28.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6415" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":144,"skipped":1978,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:36:28.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: validating api versions Mar 8 15:36:28.630: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config api-versions' Mar 8 15:36:28.863: INFO: stderr: "" Mar 8 15:36:28.863: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:36:28.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9922" for this suite. 
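Note: the api-versions check above simply asserts that the core group v1 appears in the stdout listing. A one-line equivalent that exits non-zero if it is missing:
  kubectl api-versions | grep -x v1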
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":280,"completed":145,"skipped":1985,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:36:28.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-5318 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a new StatefulSet Mar 8 15:36:28.951: INFO: Found 0 stateful pods, waiting for 3 Mar 8 15:36:38.955: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 8 15:36:38.955: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 8 15:36:38.955: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 8 15:36:38.964: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5318 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 15:36:39.210: INFO: stderr: "I0308 15:36:39.110468 1578 log.go:172] (0xc000a4d4a0) (0xc000a90320) Create stream\nI0308 15:36:39.110512 1578 log.go:172] (0xc000a4d4a0) (0xc000a90320) Stream added, broadcasting: 1\nI0308 15:36:39.112250 1578 log.go:172] (0xc000a4d4a0) Reply frame received for 1\nI0308 15:36:39.112281 1578 log.go:172] (0xc000a4d4a0) (0xc000ab20a0) Create stream\nI0308 15:36:39.112293 1578 log.go:172] (0xc000a4d4a0) (0xc000ab20a0) Stream added, broadcasting: 3\nI0308 15:36:39.113118 1578 log.go:172] (0xc000a4d4a0) Reply frame received for 3\nI0308 15:36:39.113162 1578 log.go:172] (0xc000a4d4a0) (0xc000a321e0) Create stream\nI0308 15:36:39.113178 1578 log.go:172] (0xc000a4d4a0) (0xc000a321e0) Stream added, broadcasting: 5\nI0308 15:36:39.114008 1578 log.go:172] (0xc000a4d4a0) Reply frame received for 5\nI0308 15:36:39.168702 1578 log.go:172] (0xc000a4d4a0) Data frame received for 5\nI0308 15:36:39.168722 1578 log.go:172] (0xc000a321e0) (5) Data frame handling\nI0308 15:36:39.168733 1578 log.go:172] (0xc000a321e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 15:36:39.205058 1578 log.go:172] (0xc000a4d4a0) Data frame received for 3\nI0308 15:36:39.205097 1578 log.go:172] (0xc000ab20a0) (3) Data frame handling\nI0308 15:36:39.205128 1578 log.go:172] (0xc000ab20a0) (3) Data frame sent\nI0308 15:36:39.205144 1578 log.go:172] (0xc000a4d4a0) Data frame received for 3\nI0308 15:36:39.205168 1578 log.go:172] (0xc000ab20a0) 
(3) Data frame handling\nI0308 15:36:39.205280 1578 log.go:172] (0xc000a4d4a0) Data frame received for 5\nI0308 15:36:39.205312 1578 log.go:172] (0xc000a321e0) (5) Data frame handling\nI0308 15:36:39.206843 1578 log.go:172] (0xc000a4d4a0) Data frame received for 1\nI0308 15:36:39.206875 1578 log.go:172] (0xc000a90320) (1) Data frame handling\nI0308 15:36:39.206888 1578 log.go:172] (0xc000a90320) (1) Data frame sent\nI0308 15:36:39.206927 1578 log.go:172] (0xc000a4d4a0) (0xc000a90320) Stream removed, broadcasting: 1\nI0308 15:36:39.206953 1578 log.go:172] (0xc000a4d4a0) Go away received\nI0308 15:36:39.207292 1578 log.go:172] (0xc000a4d4a0) (0xc000a90320) Stream removed, broadcasting: 1\nI0308 15:36:39.207310 1578 log.go:172] (0xc000a4d4a0) (0xc000ab20a0) Stream removed, broadcasting: 3\nI0308 15:36:39.207317 1578 log.go:172] (0xc000a4d4a0) (0xc000a321e0) Stream removed, broadcasting: 5\n" Mar 8 15:36:39.210: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 15:36:39.210: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 8 15:36:49.245: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 8 15:36:59.276: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5318 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 15:36:59.425: INFO: stderr: "I0308 15:36:59.373945 1598 log.go:172] (0xc00003a840) (0xc00062dc20) Create stream\nI0308 15:36:59.373976 1598 log.go:172] (0xc00003a840) (0xc00062dc20) Stream added, broadcasting: 1\nI0308 15:36:59.375599 1598 log.go:172] (0xc00003a840) Reply frame received for 1\nI0308 15:36:59.375620 1598 log.go:172] (0xc00003a840) (0xc00062dcc0) Create stream\nI0308 15:36:59.375625 1598 log.go:172] (0xc00003a840) (0xc00062dcc0) Stream added, broadcasting: 3\nI0308 15:36:59.376246 1598 log.go:172] (0xc00003a840) Reply frame received for 3\nI0308 15:36:59.376279 1598 log.go:172] (0xc00003a840) (0xc00062dd60) Create stream\nI0308 15:36:59.376290 1598 log.go:172] (0xc00003a840) (0xc00062dd60) Stream added, broadcasting: 5\nI0308 15:36:59.376877 1598 log.go:172] (0xc00003a840) Reply frame received for 5\nI0308 15:36:59.422928 1598 log.go:172] (0xc00003a840) Data frame received for 5\nI0308 15:36:59.422948 1598 log.go:172] (0xc00062dd60) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 15:36:59.422965 1598 log.go:172] (0xc00003a840) Data frame received for 3\nI0308 15:36:59.422986 1598 log.go:172] (0xc00062dcc0) (3) Data frame handling\nI0308 15:36:59.422998 1598 log.go:172] (0xc00062dcc0) (3) Data frame sent\nI0308 15:36:59.423011 1598 log.go:172] (0xc00003a840) Data frame received for 3\nI0308 15:36:59.423022 1598 log.go:172] (0xc00062dcc0) (3) Data frame handling\nI0308 15:36:59.423035 1598 log.go:172] (0xc00062dd60) (5) Data frame sent\nI0308 15:36:59.423047 1598 log.go:172] (0xc00003a840) Data frame received for 5\nI0308 15:36:59.423055 1598 log.go:172] (0xc00062dd60) (5) Data frame handling\nI0308 15:36:59.423584 1598 log.go:172] (0xc00003a840) Data frame received for 1\nI0308 15:36:59.423592 1598 log.go:172] (0xc00062dc20) (1) Data frame handling\nI0308 15:36:59.423599 1598 log.go:172] (0xc00062dc20) 
(1) Data frame sent\nI0308 15:36:59.423608 1598 log.go:172] (0xc00003a840) (0xc00062dc20) Stream removed, broadcasting: 1\nI0308 15:36:59.423716 1598 log.go:172] (0xc00003a840) Go away received\nI0308 15:36:59.423833 1598 log.go:172] (0xc00003a840) (0xc00062dc20) Stream removed, broadcasting: 1\nI0308 15:36:59.423846 1598 log.go:172] (0xc00003a840) (0xc00062dcc0) Stream removed, broadcasting: 3\nI0308 15:36:59.423856 1598 log.go:172] (0xc00003a840) (0xc00062dd60) Stream removed, broadcasting: 5\n" Mar 8 15:36:59.425: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 15:36:59.425: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 15:37:09.450: INFO: Waiting for StatefulSet statefulset-5318/ss2 to complete update Mar 8 15:37:09.450: INFO: Waiting for Pod statefulset-5318/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 8 15:37:09.450: INFO: Waiting for Pod statefulset-5318/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 8 15:37:09.450: INFO: Waiting for Pod statefulset-5318/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 8 15:37:19.493: INFO: Waiting for StatefulSet statefulset-5318/ss2 to complete update Mar 8 15:37:19.493: INFO: Waiting for Pod statefulset-5318/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 8 15:37:19.493: INFO: Waiting for Pod statefulset-5318/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 8 15:37:29.459: INFO: Waiting for StatefulSet statefulset-5318/ss2 to complete update Mar 8 15:37:29.459: INFO: Waiting for Pod statefulset-5318/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 8 15:37:29.459: INFO: Waiting for Pod statefulset-5318/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 8 15:37:39.456: INFO: Waiting for StatefulSet statefulset-5318/ss2 to complete update Mar 8 15:37:39.456: INFO: Waiting for Pod statefulset-5318/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 8 15:37:49.465: INFO: Waiting for StatefulSet statefulset-5318/ss2 to complete update Mar 8 15:37:49.465: INFO: Waiting for Pod statefulset-5318/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Mar 8 15:37:59.456: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5318 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 15:37:59.676: INFO: stderr: "I0308 15:37:59.579933 1616 log.go:172] (0xc000a796b0) (0xc000a186e0) Create stream\nI0308 15:37:59.579980 1616 log.go:172] (0xc000a796b0) (0xc000a186e0) Stream added, broadcasting: 1\nI0308 15:37:59.583302 1616 log.go:172] (0xc000a796b0) Reply frame received for 1\nI0308 15:37:59.583331 1616 log.go:172] (0xc000a796b0) (0xc000700500) Create stream\nI0308 15:37:59.583338 1616 log.go:172] (0xc000a796b0) (0xc000700500) Stream added, broadcasting: 3\nI0308 15:37:59.583986 1616 log.go:172] (0xc000a796b0) Reply frame received for 3\nI0308 15:37:59.584018 1616 log.go:172] (0xc000a796b0) (0xc000531180) Create stream\nI0308 15:37:59.584029 1616 log.go:172] (0xc000a796b0) (0xc000531180) Stream added, broadcasting: 5\nI0308 15:37:59.584646 1616 log.go:172] (0xc000a796b0) Reply frame received for 5\nI0308 15:37:59.648714 1616 log.go:172] (0xc000a796b0) Data frame 
received for 5\nI0308 15:37:59.648735 1616 log.go:172] (0xc000531180) (5) Data frame handling\nI0308 15:37:59.648748 1616 log.go:172] (0xc000531180) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 15:37:59.668929 1616 log.go:172] (0xc000a796b0) Data frame received for 3\nI0308 15:37:59.668962 1616 log.go:172] (0xc000700500) (3) Data frame handling\nI0308 15:37:59.668981 1616 log.go:172] (0xc000700500) (3) Data frame sent\nI0308 15:37:59.668991 1616 log.go:172] (0xc000a796b0) Data frame received for 3\nI0308 15:37:59.669012 1616 log.go:172] (0xc000700500) (3) Data frame handling\nI0308 15:37:59.669508 1616 log.go:172] (0xc000a796b0) Data frame received for 5\nI0308 15:37:59.669531 1616 log.go:172] (0xc000531180) (5) Data frame handling\nI0308 15:37:59.673051 1616 log.go:172] (0xc000a796b0) Data frame received for 1\nI0308 15:37:59.673069 1616 log.go:172] (0xc000a186e0) (1) Data frame handling\nI0308 15:37:59.673086 1616 log.go:172] (0xc000a186e0) (1) Data frame sent\nI0308 15:37:59.673273 1616 log.go:172] (0xc000a796b0) (0xc000a186e0) Stream removed, broadcasting: 1\nI0308 15:37:59.673508 1616 log.go:172] (0xc000a796b0) Go away received\nI0308 15:37:59.673622 1616 log.go:172] (0xc000a796b0) (0xc000a186e0) Stream removed, broadcasting: 1\nI0308 15:37:59.673644 1616 log.go:172] (0xc000a796b0) (0xc000700500) Stream removed, broadcasting: 3\nI0308 15:37:59.673657 1616 log.go:172] (0xc000a796b0) (0xc000531180) Stream removed, broadcasting: 5\n" Mar 8 15:37:59.676: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 15:37:59.676: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 15:38:09.742: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 8 15:38:19.823: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5318 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 15:38:20.037: INFO: stderr: "I0308 15:38:19.967399 1636 log.go:172] (0xc00003a160) (0xc000671ea0) Create stream\nI0308 15:38:19.967446 1636 log.go:172] (0xc00003a160) (0xc000671ea0) Stream added, broadcasting: 1\nI0308 15:38:19.969763 1636 log.go:172] (0xc00003a160) Reply frame received for 1\nI0308 15:38:19.969794 1636 log.go:172] (0xc00003a160) (0xc000671f40) Create stream\nI0308 15:38:19.969804 1636 log.go:172] (0xc00003a160) (0xc000671f40) Stream added, broadcasting: 3\nI0308 15:38:19.970613 1636 log.go:172] (0xc00003a160) Reply frame received for 3\nI0308 15:38:19.970654 1636 log.go:172] (0xc00003a160) (0xc00062e820) Create stream\nI0308 15:38:19.970665 1636 log.go:172] (0xc00003a160) (0xc00062e820) Stream added, broadcasting: 5\nI0308 15:38:19.971693 1636 log.go:172] (0xc00003a160) Reply frame received for 5\nI0308 15:38:20.032693 1636 log.go:172] (0xc00003a160) Data frame received for 5\nI0308 15:38:20.032727 1636 log.go:172] (0xc00062e820) (5) Data frame handling\nI0308 15:38:20.032739 1636 log.go:172] (0xc00062e820) (5) Data frame sent\nI0308 15:38:20.032749 1636 log.go:172] (0xc00003a160) Data frame received for 5\nI0308 15:38:20.032774 1636 log.go:172] (0xc00003a160) Data frame received for 3\nI0308 15:38:20.032796 1636 log.go:172] (0xc000671f40) (3) Data frame handling\nI0308 15:38:20.032812 1636 log.go:172] (0xc000671f40) (3) Data frame sent\nI0308 15:38:20.032826 1636 log.go:172] (0xc00003a160) Data frame 
received for 3\nI0308 15:38:20.032835 1636 log.go:172] (0xc000671f40) (3) Data frame handling\nI0308 15:38:20.032859 1636 log.go:172] (0xc00062e820) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 15:38:20.034482 1636 log.go:172] (0xc00003a160) Data frame received for 1\nI0308 15:38:20.034497 1636 log.go:172] (0xc000671ea0) (1) Data frame handling\nI0308 15:38:20.034514 1636 log.go:172] (0xc000671ea0) (1) Data frame sent\nI0308 15:38:20.034531 1636 log.go:172] (0xc00003a160) (0xc000671ea0) Stream removed, broadcasting: 1\nI0308 15:38:20.034550 1636 log.go:172] (0xc00003a160) Go away received\nI0308 15:38:20.034857 1636 log.go:172] (0xc00003a160) (0xc000671ea0) Stream removed, broadcasting: 1\nI0308 15:38:20.034877 1636 log.go:172] (0xc00003a160) (0xc000671f40) Stream removed, broadcasting: 3\nI0308 15:38:20.034885 1636 log.go:172] (0xc00003a160) (0xc00062e820) Stream removed, broadcasting: 5\n" Mar 8 15:38:20.038: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 15:38:20.038: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 15:38:40.059: INFO: Waiting for StatefulSet statefulset-5318/ss2 to complete update Mar 8 15:38:40.059: INFO: Waiting for Pod statefulset-5318/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 8 15:38:50.065: INFO: Deleting all statefulset in ns statefulset-5318 Mar 8 15:38:50.067: INFO: Scaling statefulset ss2 to 0 Mar 8 15:39:30.078: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 15:39:30.081: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:39:30.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5318" for this suite. 
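Note: the test drives the update by patching the pod template and then watches controller revisions (ss2-84f9d6bf57 and ss2-65c7964b94 above) until every ordinal converges, then rolls back the same way. A close by-hand equivalent using rollout commands (the wildcard updates every container in the template):
  kubectl -n statefulset-5318 set image statefulset/ss2 '*=docker.io/library/httpd:2.4.39-alpine'
  kubectl -n statefulset-5318 rollout status statefulset/ss2
  kubectl -n statefulset-5318 rollout undo statefulset/ss2     # back to the previous revision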
• [SLOW TEST:181.238 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":280,"completed":146,"skipped":1994,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:39:30.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name s-test-opt-del-cbbf01b2-af7a-4ba1-bbc6-d05b0aca1f67 STEP: Creating secret with name s-test-opt-upd-3ed23326-758e-445c-9fee-2849034c15a9 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-cbbf01b2-af7a-4ba1-bbc6-d05b0aca1f67 STEP: Updating secret s-test-opt-upd-3ed23326-758e-445c-9fee-2849034c15a9 STEP: Creating secret with name s-test-opt-create-efeb9f8b-181d-43b8-bf5d-6cbbe593de42 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:39:36.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3999" for this suite. 
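Note: the optional-updates test hinges on projected secret sources marked optional: true, which lets the volume mount even while a referenced secret is absent and be updated in place as secrets are deleted, changed, or created. A minimal sketch (secret and pod names are illustrative):
  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-optional-demo    # illustrative
  spec:
    containers:
    - name: c
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: proj
        mountPath: /etc/projected
    volumes:
    - name: proj
      projected:
        sources:
        - secret:
            name: maybe-missing      # illustrative; may not exist yet
            optional: true
  EOF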
• [SLOW TEST:6.200 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":147,"skipped":2027,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:39:36.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 15:39:36.389: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-33b10c48-922b-4cc4-8866-aefa2e2ef4ba" in namespace "security-context-test-2480" to be "success or failure" Mar 8 15:39:36.403: INFO: Pod "busybox-readonly-false-33b10c48-922b-4cc4-8866-aefa2e2ef4ba": Phase="Pending", Reason="", readiness=false. Elapsed: 14.21596ms Mar 8 15:39:38.406: INFO: Pod "busybox-readonly-false-33b10c48-922b-4cc4-8866-aefa2e2ef4ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.017388203s Mar 8 15:39:38.406: INFO: Pod "busybox-readonly-false-33b10c48-922b-4cc4-8866-aefa2e2ef4ba" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:39:38.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2480" for this suite. 
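Note: the Security Context test above verifies that readOnlyRootFilesystem: false leaves the container root filesystem writable. A minimal sketch (names illustrative):
  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: writable-rootfs-demo       # illustrative
  spec:
    restartPolicy: Never
    containers:
    - name: c
      image: busybox
      command: ["sh", "-c", "echo ok > /probe && cat /probe"]  # writes to the rootfs
      securityContext:
        readOnlyRootFilesystem: false
  EOF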
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":280,"completed":148,"skipped":2052,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:39:38.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 15:39:38.513: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:39:39.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6064" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":280,"completed":149,"skipped":2055,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:39:39.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 8 15:39:41.314: INFO: &Pod{ObjectMeta:{send-events-48bd8265-7e28-40e1-830c-aabc045badac events-1562 /api/v1/namespaces/events-1562/pods/send-events-48bd8265-7e28-40e1-830c-aabc045badac ae3b3656-df28-48b2-af0b-f1f6b2cc269f 19378 0 2020-03-08 15:39:39 +0000 UTC map[name:foo time:287479884] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gp6vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gp6vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gp6vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 15:39:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 15:39:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 15:39:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 15:39:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.2.64,StartTime:2020-03-08 15:39:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 15:39:40 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://7c99566bf3d55c5de5d3e98865b068451ea3a1d90c3e73f46c32ed1d3df62e02,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.64,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Mar 8 15:39:43.318: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 8 15:39:45.322: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:39:45.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1562" for this suite. • [SLOW TEST:6.293 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":280,"completed":150,"skipped":2073,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:39:45.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 15:39:46.052: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 15:39:48.061: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719278786, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719278786, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719278786, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719278786, 
loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 15:39:51.091: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Mar 8 15:39:51.113: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:39:51.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3198" for this suite. STEP: Destroying namespace "webhook-3198-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:5.809 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":280,"completed":151,"skipped":2074,"failed":0} SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:39:51.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod liveness-2af34341-5bb2-40a1-8ba3-6283b4a22643 in namespace container-probe-3243 Mar 8 15:39:53.329: INFO: Started pod liveness-2af34341-5bb2-40a1-8ba3-6283b4a22643 in namespace container-probe-3243 STEP: checking the pod's current state and verifying that restartCount is present Mar 8 15:39:53.331: INFO: Initial restart count of pod liveness-2af34341-5bb2-40a1-8ba3-6283b4a22643 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:43:53.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3243" for this suite. 
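Note: the probe test above runs a container listening on TCP 8080 with a tcpSocket liveness probe and asserts restartCount stays at 0 for the full observation window. A minimal sketch (pod name illustrative; that agnhost's netexec subcommand listens on 8080 by default is an assumption worth verifying for other image versions):
  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: tcp-liveness-demo          # illustrative
  spec:
    containers:
    - name: c
      image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
      args: ["netexec"]              # assumed to listen on TCP 8080
      livenessProbe:
        tcpSocket:
          port: 8080
        initialDelaySeconds: 15
        periodSeconds: 10
  EOF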
• [SLOW TEST:242.709 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":280,"completed":152,"skipped":2077,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:43:53.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 15:43:54.431: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 15:43:56.441: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719279034, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719279034, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719279034, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719279034, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 15:43:59.479: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:43:59.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7296" for this suite. 
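Note: both AdmissionWebhook tests in this run register their configurations through the admissionregistration.k8s.io API and tear them down afterwards. While one is active it can be inspected cluster-wide with:
  kubectl get mutatingwebhookconfigurations
  kubectl get validatingwebhookconfigurations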
STEP: Destroying namespace "webhook-7296-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:5.801 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":280,"completed":153,"skipped":2135,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:43:59.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Mar 8 15:43:59.783: INFO: >>> kubeConfig: /root/.kube/config Mar 8 15:44:01.600: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:44:11.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6035" for this suite. 
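Note: the CRD-publishing test asserts that schemas for freshly created CRDs in different groups show up in the aggregated OpenAPI document. For any structural CRD this can be checked by dumping the document or via kubectl explain (mycrd is an illustrative resource name):
  kubectl get --raw /openapi/v2 > swagger.json
  kubectl explain mycrd.spec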
• [SLOW TEST:12.130 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":280,"completed":154,"skipped":2155,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:44:11.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-map-346104bb-6341-4a0e-98c0-9ea9351dd638 STEP: Creating a pod to test consume secrets Mar 8 15:44:11.957: INFO: Waiting up to 5m0s for pod "pod-secrets-18197a54-394c-4342-8915-afc4a5679245" in namespace "secrets-8622" to be "success or failure" Mar 8 15:44:11.979: INFO: Pod "pod-secrets-18197a54-394c-4342-8915-afc4a5679245": Phase="Pending", Reason="", readiness=false. Elapsed: 22.236577ms Mar 8 15:44:13.984: INFO: Pod "pod-secrets-18197a54-394c-4342-8915-afc4a5679245": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.026698359s STEP: Saw pod success Mar 8 15:44:13.984: INFO: Pod "pod-secrets-18197a54-394c-4342-8915-afc4a5679245" satisfied condition "success or failure" Mar 8 15:44:13.986: INFO: Trying to get logs from node latest-worker pod pod-secrets-18197a54-394c-4342-8915-afc4a5679245 container secret-volume-test: STEP: delete the pod Mar 8 15:44:14.292: INFO: Waiting for pod pod-secrets-18197a54-394c-4342-8915-afc4a5679245 to disappear Mar 8 15:44:14.303: INFO: Pod pod-secrets-18197a54-394c-4342-8915-afc4a5679245 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:44:14.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8622" for this suite. 
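
The secret-volume pattern exercised above, restated as a manifest (a sketch with illustrative names, not the test's generated ones): the items list remaps a secret key to a chosen file path, and the per-item mode sets that file's permissions, which is what the [LinuxOnly] assertion inspects.

kubectl create secret generic test-secret --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret
      items:
      - key: data-1
        path: new-path-data-1   # the key is remapped to this filename
        mode: 0400              # per-item file mode
EOF
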
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":155,"skipped":2220,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:44:14.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a replication controller Mar 8 15:44:14.394: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2105' Mar 8 15:44:14.688: INFO: stderr: "" Mar 8 15:44:14.688: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 8 15:44:14.689: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2105' Mar 8 15:44:14.759: INFO: stderr: "" Mar 8 15:44:14.759: INFO: stdout: "update-demo-nautilus-cjj7p update-demo-nautilus-mj55w " Mar 8 15:44:14.759: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cjj7p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2105' Mar 8 15:44:14.839: INFO: stderr: "" Mar 8 15:44:14.839: INFO: stdout: "" Mar 8 15:44:14.839: INFO: update-demo-nautilus-cjj7p is created but not running Mar 8 15:44:19.840: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2105' Mar 8 15:44:19.954: INFO: stderr: "" Mar 8 15:44:19.954: INFO: stdout: "update-demo-nautilus-cjj7p update-demo-nautilus-mj55w " Mar 8 15:44:19.954: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cjj7p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2105' Mar 8 15:44:20.051: INFO: stderr: "" Mar 8 15:44:20.051: INFO: stdout: "true" Mar 8 15:44:20.051: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cjj7p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2105' Mar 8 15:44:20.131: INFO: stderr: "" Mar 8 15:44:20.131: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 15:44:20.131: INFO: validating pod update-demo-nautilus-cjj7p Mar 8 15:44:20.134: INFO: got data: { "image": "nautilus.jpg" } Mar 8 15:44:20.134: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 15:44:20.134: INFO: update-demo-nautilus-cjj7p is verified up and running Mar 8 15:44:20.134: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mj55w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2105' Mar 8 15:44:20.204: INFO: stderr: "" Mar 8 15:44:20.204: INFO: stdout: "true" Mar 8 15:44:20.204: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mj55w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2105' Mar 8 15:44:20.271: INFO: stderr: "" Mar 8 15:44:20.271: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 15:44:20.271: INFO: validating pod update-demo-nautilus-mj55w Mar 8 15:44:20.274: INFO: got data: { "image": "nautilus.jpg" } Mar 8 15:44:20.274: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 15:44:20.274: INFO: update-demo-nautilus-mj55w is verified up and running STEP: using delete to clean up resources Mar 8 15:44:20.274: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2105' Mar 8 15:44:20.346: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 8 15:44:20.346: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 8 15:44:20.346: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2105' Mar 8 15:44:20.408: INFO: stderr: "No resources found in kubectl-2105 namespace.\n" Mar 8 15:44:20.408: INFO: stdout: "" Mar 8 15:44:20.408: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2105 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 8 15:44:20.468: INFO: stderr: "" Mar 8 15:44:20.468: INFO: stdout: "update-demo-nautilus-cjj7p\nupdate-demo-nautilus-mj55w\n" Mar 8 15:44:20.969: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2105' Mar 8 15:44:21.081: INFO: stderr: "No resources found in kubectl-2105 namespace.\n" Mar 8 15:44:21.081: INFO: stdout: "" Mar 8 15:44:21.081: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2105 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 8 15:44:21.187: INFO: stderr: "" Mar 8 15:44:21.188: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:44:21.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2105" for this suite. 
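
The readiness checks above are plain go-templates evaluated by kubectl; written out as a standalone loop, the pattern is the following (pod, container, and namespace names are placeholders):

while [ "$(kubectl get pod my-pod -o template --template \
  '{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "my-container") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' \
  --namespace=my-namespace)" != "true" ]; do
  sleep 5
done

# Forced deletion, equivalent to the cleanup step above, skips the
# graceful wait; dependent pods are garbage-collected asynchronously,
# which is why they were still listed once after the replication
# controller itself was already gone:
kubectl delete rc update-demo-nautilus --grace-period=0 --force
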
• [SLOW TEST:6.885 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":280,"completed":156,"skipped":2242,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:44:21.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test override all Mar 8 15:44:21.293: INFO: Waiting up to 5m0s for pod "client-containers-398340ce-9710-44f6-a61a-3b7ff3b2e398" in namespace "containers-5378" to be "success or failure" Mar 8 15:44:21.296: INFO: Pod "client-containers-398340ce-9710-44f6-a61a-3b7ff3b2e398": Phase="Pending", Reason="", readiness=false. Elapsed: 2.541965ms Mar 8 15:44:23.300: INFO: Pod "client-containers-398340ce-9710-44f6-a61a-3b7ff3b2e398": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006173196s STEP: Saw pod success Mar 8 15:44:23.300: INFO: Pod "client-containers-398340ce-9710-44f6-a61a-3b7ff3b2e398" satisfied condition "success or failure" Mar 8 15:44:23.302: INFO: Trying to get logs from node latest-worker pod client-containers-398340ce-9710-44f6-a61a-3b7ff3b2e398 container test-container: STEP: delete the pod Mar 8 15:44:23.318: INFO: Waiting for pod client-containers-398340ce-9710-44f6-a61a-3b7ff3b2e398 to disappear Mar 8 15:44:23.350: INFO: Pod client-containers-398340ce-9710-44f6-a61a-3b7ff3b2e398 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:44:23.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5378" for this suite. 
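
What the "override all" pod above does, in manifest form: command replaces the image's ENTRYPOINT and args replaces its CMD, so neither of the image's defaults survives. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: command-override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["/bin/echo"]           # replaces the image ENTRYPOINT
    args: ["override", "arguments"]  # replaces the image CMD
EOF

kubectl logs command-override-demo   # prints: override arguments
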
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":280,"completed":157,"skipped":2253,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:44:23.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 15:44:23.413: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:44:24.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5431" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":280,"completed":158,"skipped":2264,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:44:24.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88 Mar 8 15:44:24.768: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 8 15:44:24.777: INFO: Waiting for terminating namespaces to be deleted... 
Mar 8 15:44:24.779: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 8 15:44:24.783: INFO: kube-proxy-9jc24 from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 8 15:44:24.783: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 15:44:24.783: INFO: kindnet-2j5xm from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 8 15:44:24.783: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 15:44:24.783: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 8 15:44:24.799: INFO: kube-proxy-cx5xz from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 8 15:44:24.799: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 15:44:24.799: INFO: kindnet-spz5f from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 8 15:44:24.799: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 15:44:24.799: INFO: coredns-6955765f44-cgshp from kube-system started at 2020-03-08 14:50:16 +0000 UTC (1 container statuses recorded) Mar 8 15:44:24.799: INFO: Container coredns ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-da758b4d-f99e-440b-a5eb-42f46520dd20 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-da758b4d-f99e-440b-a5eb-42f46520dd20 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-da758b4d-f99e-440b-a5eb-42f46520dd20 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:49:28.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5423" for this suite. 
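
The scheduling conflict validated above comes down to two pod specs like the sketch below: a hostPort with no hostIP binds on 0.0.0.0, i.e. every host address, so a second pod requesting the same hostPort on 127.0.0.1 cannot fit on the same node. Names are illustrative, and the nodeSelector label stands in for the random kubernetes.io/e2e-... label applied in the log.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  nodeSelector:
    example/e2e: "95"
  containers:
  - name: c
    image: k8s.gcr.io/pause:3.1
    ports:
    - containerPort: 8080
      hostPort: 54322           # no hostIP given: binds 0.0.0.0
---
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  nodeSelector:
    example/e2e: "95"
  containers:
  - name: c
    image: k8s.gcr.io/pause:3.1
    ports:
    - containerPort: 8080
      hostPort: 54322
      hostIP: 127.0.0.1         # still conflicts: 0.0.0.0 already covers it
EOF

# pod5 stays Pending with a FailedScheduling event about free host ports,
# which is what the test spends its five minutes confirming.
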
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 • [SLOW TEST:304.378 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":280,"completed":159,"skipped":2290,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:49:29.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7390 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7390;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7390 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7390;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7390.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7390.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7390.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7390.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7390.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7390.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7390.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7390.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7390.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7390.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7390.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7390.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7390.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 225.201.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.201.225_udp@PTR;check="$$(dig +tcp +noall +answer +search 225.201.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.201.225_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7390 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7390;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7390 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7390;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7390.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7390.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7390.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7390.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7390.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7390.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7390.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7390.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7390.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7390.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7390.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7390.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7390.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 225.201.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.201.225_udp@PTR;check="$$(dig +tcp +noall +answer +search 225.201.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.201.225_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 15:49:33.335: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:33.338: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:33.342: INFO: Unable to read wheezy_udp@dns-test-service.dns-7390 from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:33.345: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7390 from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:33.347: INFO: Unable to read wheezy_udp@dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:33.350: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:33.353: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:33.356: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:33.375: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:33.377: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:33.380: INFO: Unable to read jessie_udp@dns-test-service.dns-7390 from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:33.383: INFO: Unable to read jessie_tcp@dns-test-service.dns-7390 from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:33.386: INFO: Unable to read jessie_udp@dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:33.388: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:33.390: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:33.393: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:33.407: INFO: Lookups using dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7390 wheezy_tcp@dns-test-service.dns-7390 wheezy_udp@dns-test-service.dns-7390.svc wheezy_tcp@dns-test-service.dns-7390.svc wheezy_udp@_http._tcp.dns-test-service.dns-7390.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7390.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7390 jessie_tcp@dns-test-service.dns-7390 jessie_udp@dns-test-service.dns-7390.svc jessie_tcp@dns-test-service.dns-7390.svc jessie_udp@_http._tcp.dns-test-service.dns-7390.svc jessie_tcp@_http._tcp.dns-test-service.dns-7390.svc] Mar 8 15:49:38.411: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:38.414: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:38.417: INFO: Unable to read wheezy_udp@dns-test-service.dns-7390 from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:38.420: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7390 from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:38.422: INFO: Unable to read wheezy_udp@dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:38.425: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:38.427: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:38.430: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:38.450: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:38.452: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:38.455: INFO: Unable to read jessie_udp@dns-test-service.dns-7390 from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:38.457: INFO: Unable to read jessie_tcp@dns-test-service.dns-7390 from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:38.459: INFO: Unable to read jessie_udp@dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:38.462: INFO: Unable to read jessie_tcp@dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:38.464: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:38.466: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:38.480: INFO: Lookups using dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7390 wheezy_tcp@dns-test-service.dns-7390 wheezy_udp@dns-test-service.dns-7390.svc wheezy_tcp@dns-test-service.dns-7390.svc wheezy_udp@_http._tcp.dns-test-service.dns-7390.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7390.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7390 jessie_tcp@dns-test-service.dns-7390 jessie_udp@dns-test-service.dns-7390.svc jessie_tcp@dns-test-service.dns-7390.svc jessie_udp@_http._tcp.dns-test-service.dns-7390.svc jessie_tcp@_http._tcp.dns-test-service.dns-7390.svc] Mar 8 15:49:43.410: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:43.412: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:43.414: INFO: Unable to read wheezy_udp@dns-test-service.dns-7390 from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:43.416: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7390 from pod 
dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:43.417: INFO: Unable to read wheezy_udp@dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:43.419: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:43.421: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:43.423: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:43.435: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:43.437: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:43.439: INFO: Unable to read jessie_udp@dns-test-service.dns-7390 from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:43.440: INFO: Unable to read jessie_tcp@dns-test-service.dns-7390 from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:43.442: INFO: Unable to read jessie_udp@dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:43.444: INFO: Unable to read jessie_tcp@dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:43.445: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:43.447: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:43.457: INFO: Lookups using dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7390 wheezy_tcp@dns-test-service.dns-7390 wheezy_udp@dns-test-service.dns-7390.svc wheezy_tcp@dns-test-service.dns-7390.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-7390.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7390.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7390 jessie_tcp@dns-test-service.dns-7390 jessie_udp@dns-test-service.dns-7390.svc jessie_tcp@dns-test-service.dns-7390.svc jessie_udp@_http._tcp.dns-test-service.dns-7390.svc jessie_tcp@_http._tcp.dns-test-service.dns-7390.svc] Mar 8 15:49:48.410: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:48.413: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:48.415: INFO: Unable to read wheezy_udp@dns-test-service.dns-7390 from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:48.418: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7390 from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:48.420: INFO: Unable to read wheezy_udp@dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:48.422: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:48.424: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:48.426: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:48.440: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:48.442: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:48.444: INFO: Unable to read jessie_udp@dns-test-service.dns-7390 from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:48.446: INFO: Unable to read jessie_tcp@dns-test-service.dns-7390 from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:48.448: INFO: Unable to read jessie_udp@dns-test-service.dns-7390.svc from pod 
dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:48.450: INFO: Unable to read jessie_tcp@dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:48.452: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:48.454: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:48.467: INFO: Lookups using dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7390 wheezy_tcp@dns-test-service.dns-7390 wheezy_udp@dns-test-service.dns-7390.svc wheezy_tcp@dns-test-service.dns-7390.svc wheezy_udp@_http._tcp.dns-test-service.dns-7390.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7390.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7390 jessie_tcp@dns-test-service.dns-7390 jessie_udp@dns-test-service.dns-7390.svc jessie_tcp@dns-test-service.dns-7390.svc jessie_udp@_http._tcp.dns-test-service.dns-7390.svc jessie_tcp@_http._tcp.dns-test-service.dns-7390.svc] Mar 8 15:49:53.412: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:53.414: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:53.417: INFO: Unable to read wheezy_udp@dns-test-service.dns-7390 from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:53.420: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7390 from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:53.423: INFO: Unable to read wheezy_udp@dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:53.425: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:53.428: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:53.431: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7390.svc from pod 
dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:53.448: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:53.451: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:53.453: INFO: Unable to read jessie_udp@dns-test-service.dns-7390 from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:53.455: INFO: Unable to read jessie_tcp@dns-test-service.dns-7390 from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:53.458: INFO: Unable to read jessie_udp@dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:53.460: INFO: Unable to read jessie_tcp@dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:53.463: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:53.465: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:53.479: INFO: Lookups using dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7390 wheezy_tcp@dns-test-service.dns-7390 wheezy_udp@dns-test-service.dns-7390.svc wheezy_tcp@dns-test-service.dns-7390.svc wheezy_udp@_http._tcp.dns-test-service.dns-7390.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7390.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7390 jessie_tcp@dns-test-service.dns-7390 jessie_udp@dns-test-service.dns-7390.svc jessie_tcp@dns-test-service.dns-7390.svc jessie_udp@_http._tcp.dns-test-service.dns-7390.svc jessie_tcp@_http._tcp.dns-test-service.dns-7390.svc] Mar 8 15:49:58.424: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:58.427: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:58.429: INFO: Unable to read wheezy_udp@dns-test-service.dns-7390 from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could 
not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:58.431: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7390 from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:58.433: INFO: Unable to read wheezy_udp@dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:58.436: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:58.438: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:58.440: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:58.457: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:58.459: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:58.461: INFO: Unable to read jessie_udp@dns-test-service.dns-7390 from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:58.464: INFO: Unable to read jessie_tcp@dns-test-service.dns-7390 from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:58.466: INFO: Unable to read jessie_udp@dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:58.468: INFO: Unable to read jessie_tcp@dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:58.470: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:58.472: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7390.svc from pod dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb: the server could not find the requested resource (get pods dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb) Mar 8 15:49:58.486: INFO: Lookups using dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service 
wheezy_udp@dns-test-service.dns-7390 wheezy_tcp@dns-test-service.dns-7390 wheezy_udp@dns-test-service.dns-7390.svc wheezy_tcp@dns-test-service.dns-7390.svc wheezy_udp@_http._tcp.dns-test-service.dns-7390.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7390.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7390 jessie_tcp@dns-test-service.dns-7390 jessie_udp@dns-test-service.dns-7390.svc jessie_tcp@dns-test-service.dns-7390.svc jessie_udp@_http._tcp.dns-test-service.dns-7390.svc jessie_tcp@_http._tcp.dns-test-service.dns-7390.svc] Mar 8 15:50:03.468: INFO: DNS probes using dns-7390/dns-test-3854b549-d87d-4622-ac9b-cd59d01147bb succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:50:03.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7390" for this suite. • [SLOW TEST:34.679 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":280,"completed":160,"skipped":2314,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:50:03.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:50:07.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6663" for this suite. 
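
The terminated-reason assertion above can be reproduced by hand: run a pod whose command always fails and read the reason back from the container status. The pod name and image tag here are placeholders; the e2e test uses its own generated names.

kubectl run bin-false-pod --image=busybox:1.29 --restart=Never -- /bin/false

# Once the container has exited:
kubectl get pod bin-false-pod \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
# For a failing command this prints a reason such as "Error"; the
# surrounding terminated state also carries the non-zero exitCode.
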
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":280,"completed":161,"skipped":2326,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:50:07.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Mar 8 15:50:07.909: INFO: PodSpec: initContainers in spec.initContainers Mar 8 15:50:53.250: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-8af60be0-c6d1-4c98-ae82-24809c4585df", GenerateName:"", Namespace:"init-container-1123", SelfLink:"/api/v1/namespaces/init-container-1123/pods/pod-init-8af60be0-c6d1-4c98-ae82-24809c4585df", UID:"5aa0f900-3ad1-4f37-9ff9-d075658af812", ResourceVersion:"21854", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719279407, loc:(*time.Location)(0x7e52ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"909521873"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-h4nfp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc005490180), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-h4nfp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-h4nfp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-h4nfp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0022c6448), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0029f6060), 
ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0022c65a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0022c6600)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0022c6608), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0022c660c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719279408, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719279408, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719279408, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719279407, loc:(*time.Location)(0x7e52ca0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.16", PodIP:"10.244.1.192", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.192"}}, StartTime:(*v1.Time)(0xc004bbc160), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0028de150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0028de1c0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://982942bf851f8a12b0bb1ff8cf8d43020e4ef1a5dbf82778355d5999538e7a61", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004bbc1a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, 
ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004bbc180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0022c66df)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:50:53.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1123" for this suite. • [SLOW TEST:45.462 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":280,"completed":162,"skipped":2338,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:50:53.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 8 15:50:53.368: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c1b484ce-0cc3-450e-92b3-e48c5700010a" in namespace "projected-4307" to be "success or failure" Mar 8 15:50:53.395: INFO: Pod "downwardapi-volume-c1b484ce-0cc3-450e-92b3-e48c5700010a": Phase="Pending", Reason="", readiness=false. Elapsed: 26.723804ms Mar 8 15:50:55.399: INFO: Pod "downwardapi-volume-c1b484ce-0cc3-450e-92b3-e48c5700010a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.030419846s STEP: Saw pod success Mar 8 15:50:55.399: INFO: Pod "downwardapi-volume-c1b484ce-0cc3-450e-92b3-e48c5700010a" satisfied condition "success or failure" Mar 8 15:50:55.401: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-c1b484ce-0cc3-450e-92b3-e48c5700010a container client-container: STEP: delete the pod Mar 8 15:50:55.442: INFO: Waiting for pod downwardapi-volume-c1b484ce-0cc3-450e-92b3-e48c5700010a to disappear Mar 8 15:50:55.449: INFO: Pod downwardapi-volume-c1b484ce-0cc3-450e-92b3-e48c5700010a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:50:55.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4307" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":163,"skipped":2364,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:50:55.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:51:01.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4853" for this suite. STEP: Destroying namespace "nsdeletetest-2409" for this suite. Mar 8 15:51:01.716: INFO: Namespace nsdeletetest-2409 was already deleted STEP: Destroying namespace "nsdeletetest-3540" for this suite. • [SLOW TEST:6.263 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":280,"completed":164,"skipped":2397,"failed":0} [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:51:01.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:51:12.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8507" for this suite. • [SLOW TEST:11.102 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":280,"completed":165,"skipped":2397,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:51:12.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:51:24.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5935" for this suite. • [SLOW TEST:11.191 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":280,"completed":166,"skipped":2426,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:51:24.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:51:28.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-431" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":167,"skipped":2444,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:51:28.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-2c712c94-c49c-45ee-baa9-f05b592d3848 STEP: Creating a pod to test consume configMaps Mar 8 15:51:28.182: INFO: Waiting up to 5m0s for pod "pod-configmaps-c2a78d0b-1f28-4989-833d-bccb4699d533" in namespace "configmap-4177" to be "success or failure" Mar 8 15:51:28.197: INFO: Pod "pod-configmaps-c2a78d0b-1f28-4989-833d-bccb4699d533": Phase="Pending", Reason="", readiness=false. Elapsed: 15.20024ms Mar 8 15:51:30.209: INFO: Pod "pod-configmaps-c2a78d0b-1f28-4989-833d-bccb4699d533": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026534809s Mar 8 15:51:32.212: INFO: Pod "pod-configmaps-c2a78d0b-1f28-4989-833d-bccb4699d533": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030256122s STEP: Saw pod success Mar 8 15:51:32.212: INFO: Pod "pod-configmaps-c2a78d0b-1f28-4989-833d-bccb4699d533" satisfied condition "success or failure" Mar 8 15:51:32.215: INFO: Trying to get logs from node latest-worker pod pod-configmaps-c2a78d0b-1f28-4989-833d-bccb4699d533 container configmap-volume-test: STEP: delete the pod Mar 8 15:51:32.233: INFO: Waiting for pod pod-configmaps-c2a78d0b-1f28-4989-833d-bccb4699d533 to disappear Mar 8 15:51:32.237: INFO: Pod pod-configmaps-c2a78d0b-1f28-4989-833d-bccb4699d533 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:51:32.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4177" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":280,"completed":168,"skipped":2474,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:51:32.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 15:51:32.326: INFO: The status of Pod test-webserver-e6193313-d293-49c0-bd3a-a6ab936cb0a2 is Pending, waiting for it to be Running (with Ready = true) Mar 8 15:51:34.347: INFO: The status of Pod test-webserver-e6193313-d293-49c0-bd3a-a6ab936cb0a2 is Running (Ready = false) Mar 8 15:51:36.329: INFO: The status of Pod test-webserver-e6193313-d293-49c0-bd3a-a6ab936cb0a2 is Running (Ready = false) Mar 8 15:51:38.330: INFO: The status of Pod test-webserver-e6193313-d293-49c0-bd3a-a6ab936cb0a2 is Running (Ready = false) Mar 8 15:51:40.330: INFO: The status of Pod test-webserver-e6193313-d293-49c0-bd3a-a6ab936cb0a2 is Running (Ready = false) Mar 8 15:51:42.330: INFO: The status of Pod test-webserver-e6193313-d293-49c0-bd3a-a6ab936cb0a2 is Running (Ready = false) Mar 8 15:51:44.330: INFO: The status of Pod test-webserver-e6193313-d293-49c0-bd3a-a6ab936cb0a2 is Running (Ready = false) Mar 8 15:51:46.330: INFO: The status of Pod test-webserver-e6193313-d293-49c0-bd3a-a6ab936cb0a2 is Running (Ready = false) Mar 8 15:51:48.330: INFO: The status of Pod test-webserver-e6193313-d293-49c0-bd3a-a6ab936cb0a2 is Running (Ready = false) Mar 8 15:51:50.330: INFO: The status of Pod test-webserver-e6193313-d293-49c0-bd3a-a6ab936cb0a2 is Running (Ready = true) Mar 8 15:51:50.332: INFO: Container started at 2020-03-08 15:51:33 +0000 UTC, pod became ready at 2020-03-08 15:51:50 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 
15:51:50.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9048" for this suite. • [SLOW TEST:18.094 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":280,"completed":169,"skipped":2480,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:51:50.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-b61204e0-9f03-4069-9edd-9b86156c224a STEP: Creating a pod to test consume configMaps Mar 8 15:51:50.453: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9381b28c-4696-41cd-b422-23a7bc02c9aa" in namespace "projected-1737" to be "success or failure" Mar 8 15:51:50.469: INFO: Pod "pod-projected-configmaps-9381b28c-4696-41cd-b422-23a7bc02c9aa": Phase="Pending", Reason="", readiness=false. Elapsed: 15.661929ms Mar 8 15:51:52.473: INFO: Pod "pod-projected-configmaps-9381b28c-4696-41cd-b422-23a7bc02c9aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01993329s STEP: Saw pod success Mar 8 15:51:52.473: INFO: Pod "pod-projected-configmaps-9381b28c-4696-41cd-b422-23a7bc02c9aa" satisfied condition "success or failure" Mar 8 15:51:52.476: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-9381b28c-4696-41cd-b422-23a7bc02c9aa container projected-configmap-volume-test: STEP: delete the pod Mar 8 15:51:52.506: INFO: Waiting for pod pod-projected-configmaps-9381b28c-4696-41cd-b422-23a7bc02c9aa to disappear Mar 8 15:51:52.510: INFO: Pod pod-projected-configmaps-9381b28c-4696-41cd-b422-23a7bc02c9aa no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:51:52.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1737" for this suite. 
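Consuming a projected ConfigMap from a pod that runs as non-root, as the spec above does, comes down to a pod-level securityContext plus a projected volume. A minimal sketch under the assumption of an existing cluster; the ConfigMap, key, and pod names are illustrative:

kubectl create configmap demo-config --from-literal=data-1=value-1
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-nonroot     # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # any non-root UID
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/projected/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: demo-config
EOF
kubectl logs projected-cm-nonroot   # expect "value-1" once the pod Succeeds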
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":280,"completed":170,"skipped":2486,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:51:52.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 8 15:51:52.590: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-1895' Mar 8 15:51:54.764: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 8 15:51:54.764: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Mar 8 15:51:54.781: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-j5hjx] Mar 8 15:51:54.781: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-j5hjx" in namespace "kubectl-1895" to be "running and ready" Mar 8 15:51:54.814: INFO: Pod "e2e-test-httpd-rc-j5hjx": Phase="Pending", Reason="", readiness=false. Elapsed: 33.419847ms Mar 8 15:51:56.818: INFO: Pod "e2e-test-httpd-rc-j5hjx": Phase="Running", Reason="", readiness=true. Elapsed: 2.0372979s Mar 8 15:51:56.818: INFO: Pod "e2e-test-httpd-rc-j5hjx" satisfied condition "running and ready" Mar 8 15:51:56.818: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-j5hjx] Mar 8 15:51:56.818: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-1895' Mar 8 15:51:56.955: INFO: stderr: "" Mar 8 15:51:56.956: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.67. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.67. 
Set the 'ServerName' directive globally to suppress this message\n[Sun Mar 08 15:51:56.017749 2020] [mpm_event:notice] [pid 1:tid 140067607833448] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sun Mar 08 15:51:56.017799 2020] [core:notice] [pid 1:tid 140067607833448] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1639 Mar 8 15:51:56.956: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-1895' Mar 8 15:51:57.082: INFO: stderr: "" Mar 8 15:51:57.082: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:51:57.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1895" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":280,"completed":171,"skipped":2517,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:51:57.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-4155 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a new StatefulSet Mar 8 15:51:57.241: INFO: Found 0 stateful pods, waiting for 3 Mar 8 15:52:07.246: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 8 15:52:07.246: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 8 15:52:07.246: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 8 15:52:07.275: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 8 15:52:17.313: INFO: Updating stateful set ss2 Mar 8 15:52:17.370: INFO: Waiting for Pod statefulset-4155/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Mar 8 
15:52:27.466: INFO: Found 2 stateful pods, waiting for 3 Mar 8 15:52:37.472: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 8 15:52:37.472: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 8 15:52:37.472: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 8 15:52:37.496: INFO: Updating stateful set ss2 Mar 8 15:52:37.512: INFO: Waiting for Pod statefulset-4155/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 8 15:52:47.536: INFO: Updating stateful set ss2 Mar 8 15:52:47.564: INFO: Waiting for StatefulSet statefulset-4155/ss2 to complete update Mar 8 15:52:47.564: INFO: Waiting for Pod statefulset-4155/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 8 15:52:57.571: INFO: Deleting all statefulset in ns statefulset-4155 Mar 8 15:52:57.573: INFO: Scaling statefulset ss2 to 0 Mar 8 15:53:17.591: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 15:53:17.593: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:53:17.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4155" for this suite. • [SLOW TEST:80.541 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":280,"completed":172,"skipped":2618,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:53:17.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1899 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 8 15:53:17.731: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 
--kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-7125' Mar 8 15:53:17.841: INFO: stderr: "" Mar 8 15:53:17.841: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Mar 8 15:53:22.891: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-7125 -o json' Mar 8 15:53:22.985: INFO: stderr: "" Mar 8 15:53:22.985: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-08T15:53:17Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-7125\",\n \"resourceVersion\": \"22751\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7125/pods/e2e-test-httpd-pod\",\n \"uid\": \"e930c026-d87f-4d32-acfa-ca3c58932aae\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-kfcbm\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-kfcbm\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-kfcbm\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-08T15:53:17Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-08T15:53:19Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-08T15:53:19Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-08T15:53:17Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://104c6757d5a28515145eb28865e0751a818f59546f7a6bbcc79b92f975250e96\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-08T15:53:19Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.16\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.202\",\n \"podIPs\": [\n 
{\n \"ip\": \"10.244.1.202\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-08T15:53:17Z\"\n }\n}\n" STEP: replace the image in the pod Mar 8 15:53:22.985: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7125' Mar 8 15:53:23.243: INFO: stderr: "" Mar 8 15:53:23.243: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1904 Mar 8 15:53:23.276: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7125' Mar 8 15:53:32.506: INFO: stderr: "" Mar 8 15:53:32.506: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:53:32.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7125" for this suite. • [SLOW TEST:14.883 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1895 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":280,"completed":173,"skipped":2621,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:53:32.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:53:39.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6679" for this suite. • [SLOW TEST:7.069 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":280,"completed":174,"skipped":2642,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:53:39.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:53:52.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6133" for this suite. • [SLOW TEST:13.153 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":280,"completed":175,"skipped":2643,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:53:52.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0308 15:54:32.866407 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 8 15:54:32.866: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:54:32.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3425" for this suite. 
• [SLOW TEST:40.134 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":280,"completed":176,"skipped":2655,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:54:32.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 15:54:32.996: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 8 15:54:35.812: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7896 create -f -' Mar 8 15:54:38.736: INFO: stderr: "" Mar 8 15:54:38.736: INFO: stdout: "e2e-test-crd-publish-openapi-7159-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 8 15:54:38.737: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7896 delete e2e-test-crd-publish-openapi-7159-crds test-cr' Mar 8 15:54:38.841: INFO: stderr: "" Mar 8 15:54:38.841: INFO: stdout: "e2e-test-crd-publish-openapi-7159-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Mar 8 15:54:38.841: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7896 apply -f -' Mar 8 15:54:39.150: INFO: stderr: "" Mar 8 15:54:39.150: INFO: stdout: "e2e-test-crd-publish-openapi-7159-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 8 15:54:39.150: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7896 delete e2e-test-crd-publish-openapi-7159-crds test-cr' Mar 8 15:54:39.277: INFO: stderr: "" Mar 8 15:54:39.277: INFO: stdout: "e2e-test-crd-publish-openapi-7159-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 8 15:54:39.277: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7159-crds' Mar 8 15:54:39.524: INFO: stderr: "" Mar 8 15:54:39.524: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7159-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] 
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:54:42.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7896" for this suite. • [SLOW TEST:9.419 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":280,"completed":177,"skipped":2658,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:54:42.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-6ae2dea2-e8f2-4a14-94d1-40e90762ebbc STEP: Creating a pod to test consume configMaps Mar 8 15:54:42.848: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2834ff8d-8215-4c78-b2c8-5511fedfd37c" in namespace "projected-6059" to be "success or failure" Mar 8 15:54:43.002: INFO: Pod "pod-projected-configmaps-2834ff8d-8215-4c78-b2c8-5511fedfd37c": Phase="Pending", Reason="", readiness=false. Elapsed: 153.980475ms Mar 8 15:54:45.005: INFO: Pod "pod-projected-configmaps-2834ff8d-8215-4c78-b2c8-5511fedfd37c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.157736853s STEP: Saw pod success Mar 8 15:54:45.006: INFO: Pod "pod-projected-configmaps-2834ff8d-8215-4c78-b2c8-5511fedfd37c" satisfied condition "success or failure" Mar 8 15:54:45.008: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-2834ff8d-8215-4c78-b2c8-5511fedfd37c container projected-configmap-volume-test: STEP: delete the pod Mar 8 15:54:45.050: INFO: Waiting for pod pod-projected-configmaps-2834ff8d-8215-4c78-b2c8-5511fedfd37c to disappear Mar 8 15:54:45.054: INFO: Pod pod-projected-configmaps-2834ff8d-8215-4c78-b2c8-5511fedfd37c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:54:45.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6059" for this suite. 
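The defaultMode variant above differs from a plain projected-ConfigMap mount only in the mode bits stamped onto the projected files. A minimal sketch, reusing the hypothetical demo-config ConfigMap from the earlier example:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-mode        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/projected/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected
  volumes:
  - name: cm
    projected:
      defaultMode: 0440          # read for owner and group only, applied to every projected file
      sources:
      - configMap:
          name: demo-config
EOF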
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":178,"skipped":2689,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:54:45.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating service endpoint-test2 in namespace services-9329 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9329 to expose endpoints map[] Mar 8 15:54:45.160: INFO: Get endpoints failed (9.094007ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Mar 8 15:54:46.163: INFO: successfully validated that service endpoint-test2 in namespace services-9329 exposes endpoints map[] (1.012271245s elapsed) STEP: Creating pod pod1 in namespace services-9329 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9329 to expose endpoints map[pod1:[80]] Mar 8 15:54:48.223: INFO: successfully validated that service endpoint-test2 in namespace services-9329 exposes endpoints map[pod1:[80]] (2.055583912s elapsed) STEP: Creating pod pod2 in namespace services-9329 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9329 to expose endpoints map[pod1:[80] pod2:[80]] Mar 8 15:54:50.816: INFO: successfully validated that service endpoint-test2 in namespace services-9329 exposes endpoints map[pod1:[80] pod2:[80]] (2.589942347s elapsed) STEP: Deleting pod pod1 in namespace services-9329 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9329 to expose endpoints map[pod2:[80]] Mar 8 15:54:50.861: INFO: successfully validated that service endpoint-test2 in namespace services-9329 exposes endpoints map[pod2:[80]] (33.386311ms elapsed) STEP: Deleting pod pod2 in namespace services-9329 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9329 to expose endpoints map[] Mar 8 15:54:51.910: INFO: successfully validated that service endpoint-test2 in namespace services-9329 exposes endpoints map[] (1.044189688s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:54:52.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9329" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:6.996 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":280,"completed":179,"skipped":2697,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:54:52.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 15:54:52.775: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 15:54:54.783: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719279692, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719279692, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719279692, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719279692, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 15:54:57.826: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be 
possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:54:58.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9823" for this suite. STEP: Destroying namespace "webhook-9823-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:6.378 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":280,"completed":180,"skipped":2702,"failed":0} SSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:54:58.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test override arguments Mar 8 15:54:58.595: INFO: Waiting up to 5m0s for pod "client-containers-31f08f5b-b5c4-494a-8adc-f4564bb1d155" in namespace "containers-8247" to be "success or failure" Mar 8 15:54:58.606: INFO: Pod "client-containers-31f08f5b-b5c4-494a-8adc-f4564bb1d155": Phase="Pending", Reason="", readiness=false. Elapsed: 10.798036ms Mar 8 15:55:00.609: INFO: Pod "client-containers-31f08f5b-b5c4-494a-8adc-f4564bb1d155": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014096436s Mar 8 15:55:02.613: INFO: Pod "client-containers-31f08f5b-b5c4-494a-8adc-f4564bb1d155": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01784022s STEP: Saw pod success Mar 8 15:55:02.613: INFO: Pod "client-containers-31f08f5b-b5c4-494a-8adc-f4564bb1d155" satisfied condition "success or failure" Mar 8 15:55:02.616: INFO: Trying to get logs from node latest-worker pod client-containers-31f08f5b-b5c4-494a-8adc-f4564bb1d155 container test-container: STEP: delete the pod Mar 8 15:55:02.657: INFO: Waiting for pod client-containers-31f08f5b-b5c4-494a-8adc-f4564bb1d155 to disappear Mar 8 15:55:02.661: INFO: Pod client-containers-31f08f5b-b5c4-494a-8adc-f4564bb1d155 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:55:02.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8247" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":280,"completed":181,"skipped":2708,"failed":0} SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:55:02.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-4636 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 8 15:55:02.725: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 8 15:55:02.799: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 8 15:55:04.803: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 15:55:06.802: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 15:55:08.802: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 15:55:10.803: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 15:55:12.803: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 15:55:14.802: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 15:55:16.802: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 15:55:18.802: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 8 15:55:18.806: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 8 15:55:20.810: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 8 15:55:22.810: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 8 15:55:24.830: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.213:8080/dial?request=hostname&protocol=udp&host=10.244.1.212&port=8081&tries=1'] Namespace:pod-network-test-4636 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 15:55:24.830: INFO: >>> kubeConfig: /root/.kube/config I0308 
15:55:24.862889 7 log.go:172] (0xc001fd2420) (0xc001dcaa00) Create stream I0308 15:55:24.862921 7 log.go:172] (0xc001fd2420) (0xc001dcaa00) Stream added, broadcasting: 1 I0308 15:55:24.865112 7 log.go:172] (0xc001fd2420) Reply frame received for 1 I0308 15:55:24.865140 7 log.go:172] (0xc001fd2420) (0xc001eb9400) Create stream I0308 15:55:24.865150 7 log.go:172] (0xc001fd2420) (0xc001eb9400) Stream added, broadcasting: 3 I0308 15:55:24.866001 7 log.go:172] (0xc001fd2420) Reply frame received for 3 I0308 15:55:24.866026 7 log.go:172] (0xc001fd2420) (0xc001a13720) Create stream I0308 15:55:24.866036 7 log.go:172] (0xc001fd2420) (0xc001a13720) Stream added, broadcasting: 5 I0308 15:55:24.866808 7 log.go:172] (0xc001fd2420) Reply frame received for 5 I0308 15:55:24.920981 7 log.go:172] (0xc001fd2420) Data frame received for 3 I0308 15:55:24.921007 7 log.go:172] (0xc001eb9400) (3) Data frame handling I0308 15:55:24.921025 7 log.go:172] (0xc001eb9400) (3) Data frame sent I0308 15:55:24.921690 7 log.go:172] (0xc001fd2420) Data frame received for 5 I0308 15:55:24.921723 7 log.go:172] (0xc001a13720) (5) Data frame handling I0308 15:55:24.921740 7 log.go:172] (0xc001fd2420) Data frame received for 3 I0308 15:55:24.921756 7 log.go:172] (0xc001eb9400) (3) Data frame handling I0308 15:55:24.922959 7 log.go:172] (0xc001fd2420) Data frame received for 1 I0308 15:55:24.922979 7 log.go:172] (0xc001dcaa00) (1) Data frame handling I0308 15:55:24.922986 7 log.go:172] (0xc001dcaa00) (1) Data frame sent I0308 15:55:24.922995 7 log.go:172] (0xc001fd2420) (0xc001dcaa00) Stream removed, broadcasting: 1 I0308 15:55:24.923008 7 log.go:172] (0xc001fd2420) Go away received I0308 15:55:24.923194 7 log.go:172] (0xc001fd2420) (0xc001dcaa00) Stream removed, broadcasting: 1 I0308 15:55:24.923207 7 log.go:172] (0xc001fd2420) (0xc001eb9400) Stream removed, broadcasting: 3 I0308 15:55:24.923214 7 log.go:172] (0xc001fd2420) (0xc001a13720) Stream removed, broadcasting: 5 Mar 8 15:55:24.923: INFO: Waiting for responses: map[] Mar 8 15:55:24.926: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.213:8080/dial?request=hostname&protocol=udp&host=10.244.2.77&port=8081&tries=1'] Namespace:pod-network-test-4636 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 15:55:24.926: INFO: >>> kubeConfig: /root/.kube/config I0308 15:55:24.950493 7 log.go:172] (0xc0027288f0) (0xc00109e0a0) Create stream I0308 15:55:24.950522 7 log.go:172] (0xc0027288f0) (0xc00109e0a0) Stream added, broadcasting: 1 I0308 15:55:24.953951 7 log.go:172] (0xc0027288f0) Reply frame received for 1 I0308 15:55:24.953991 7 log.go:172] (0xc0027288f0) (0xc00109e1e0) Create stream I0308 15:55:24.954004 7 log.go:172] (0xc0027288f0) (0xc00109e1e0) Stream added, broadcasting: 3 I0308 15:55:24.954839 7 log.go:172] (0xc0027288f0) Reply frame received for 3 I0308 15:55:24.954867 7 log.go:172] (0xc0027288f0) (0xc001dcab40) Create stream I0308 15:55:24.954877 7 log.go:172] (0xc0027288f0) (0xc001dcab40) Stream added, broadcasting: 5 I0308 15:55:24.955839 7 log.go:172] (0xc0027288f0) Reply frame received for 5 I0308 15:55:25.005899 7 log.go:172] (0xc0027288f0) Data frame received for 5 I0308 15:55:25.005922 7 log.go:172] (0xc001dcab40) (5) Data frame handling I0308 15:55:25.005939 7 log.go:172] (0xc0027288f0) Data frame received for 3 I0308 15:55:25.005947 7 log.go:172] (0xc00109e1e0) (3) Data frame handling I0308 15:55:25.005956 7 log.go:172] (0xc00109e1e0) (3) Data frame sent I0308 
15:55:25.005963 7 log.go:172] (0xc0027288f0) Data frame received for 3 I0308 15:55:25.006015 7 log.go:172] (0xc00109e1e0) (3) Data frame handling I0308 15:55:25.006048 7 log.go:172] (0xc0027288f0) Data frame received for 1 I0308 15:55:25.006065 7 log.go:172] (0xc00109e0a0) (1) Data frame handling I0308 15:55:25.006075 7 log.go:172] (0xc00109e0a0) (1) Data frame sent I0308 15:55:25.006086 7 log.go:172] (0xc0027288f0) (0xc00109e0a0) Stream removed, broadcasting: 1 I0308 15:55:25.006098 7 log.go:172] (0xc0027288f0) Go away received I0308 15:55:25.006203 7 log.go:172] (0xc0027288f0) (0xc00109e0a0) Stream removed, broadcasting: 1 I0308 15:55:25.006219 7 log.go:172] (0xc0027288f0) (0xc00109e1e0) Stream removed, broadcasting: 3 I0308 15:55:25.006225 7 log.go:172] (0xc0027288f0) (0xc001dcab40) Stream removed, broadcasting: 5 Mar 8 15:55:25.006: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:55:25.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4636" for this suite. • [SLOW TEST:22.344 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":280,"completed":182,"skipped":2715,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:55:25.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-1847 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-1847 STEP: creating replication controller externalsvc in namespace services-1847 I0308 15:55:25.231047 7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-1847, replica count: 2 I0308 15:55:28.281475 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Mar 8 15:55:28.311: INFO: Creating new exec pod Mar 8 15:55:30.350: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-1847 execpod7m9nr -- /bin/sh -x -c nslookup clusterip-service' Mar 8 15:55:30.583: INFO: stderr: "I0308 15:55:30.521028 2187 log.go:172] (0xc0009de000) (0xc00090a000) Create stream\nI0308 15:55:30.521060 2187 log.go:172] (0xc0009de000) (0xc00090a000) Stream added, broadcasting: 1\nI0308 15:55:30.522422 2187 log.go:172] (0xc0009de000) Reply frame received for 1\nI0308 15:55:30.522449 2187 log.go:172] (0xc0009de000) (0xc000a92000) Create stream\nI0308 15:55:30.522459 2187 log.go:172] (0xc0009de000) (0xc000a92000) Stream added, broadcasting: 3\nI0308 15:55:30.523076 2187 log.go:172] (0xc0009de000) Reply frame received for 3\nI0308 15:55:30.523091 2187 log.go:172] (0xc0009de000) (0xc000a920a0) Create stream\nI0308 15:55:30.523097 2187 log.go:172] (0xc0009de000) (0xc000a920a0) Stream added, broadcasting: 5\nI0308 15:55:30.523563 2187 log.go:172] (0xc0009de000) Reply frame received for 5\nI0308 15:55:30.574697 2187 log.go:172] (0xc0009de000) Data frame received for 5\nI0308 15:55:30.574713 2187 log.go:172] (0xc000a920a0) (5) Data frame handling\nI0308 15:55:30.574720 2187 log.go:172] (0xc000a920a0) (5) Data frame sent\n+ nslookup clusterip-service\nI0308 15:55:30.578952 2187 log.go:172] (0xc0009de000) Data frame received for 3\nI0308 15:55:30.578969 2187 log.go:172] (0xc000a92000) (3) Data frame handling\nI0308 15:55:30.578978 2187 log.go:172] (0xc000a92000) (3) Data frame sent\nI0308 15:55:30.579770 2187 log.go:172] (0xc0009de000) Data frame received for 3\nI0308 15:55:30.579785 2187 log.go:172] (0xc000a92000) (3) Data frame handling\nI0308 15:55:30.579797 2187 log.go:172] (0xc000a92000) (3) Data frame sent\nI0308 15:55:30.580021 2187 log.go:172] (0xc0009de000) Data frame received for 3\nI0308 15:55:30.580031 2187 log.go:172] (0xc000a92000) (3) Data frame handling\nI0308 15:55:30.580132 2187 log.go:172] (0xc0009de000) Data frame received for 5\nI0308 15:55:30.580143 2187 log.go:172] (0xc000a920a0) (5) Data frame handling\nI0308 15:55:30.581259 2187 log.go:172] (0xc0009de000) Data frame received for 1\nI0308 15:55:30.581272 2187 log.go:172] (0xc00090a000) (1) Data frame handling\nI0308 15:55:30.581282 2187 log.go:172] (0xc00090a000) (1) Data frame sent\nI0308 15:55:30.581294 2187 log.go:172] (0xc0009de000) (0xc00090a000) Stream removed, broadcasting: 1\nI0308 15:55:30.581303 2187 log.go:172] (0xc0009de000) Go away received\nI0308 15:55:30.581556 2187 log.go:172] (0xc0009de000) (0xc00090a000) Stream removed, broadcasting: 1\nI0308 15:55:30.581567 2187 log.go:172] (0xc0009de000) (0xc000a92000) Stream removed, broadcasting: 3\nI0308 15:55:30.581571 2187 log.go:172] (0xc0009de000) (0xc000a920a0) Stream removed, broadcasting: 5\n" Mar 8 15:55:30.583: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-1847.svc.cluster.local\tcanonical name = externalsvc.services-1847.svc.cluster.local.\nName:\texternalsvc.services-1847.svc.cluster.local\nAddress: 10.96.67.36\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1847, will wait for the garbage collector to delete the pods Mar 8 15:55:30.640: INFO: Deleting ReplicationController externalsvc took: 3.796797ms Mar 8 15:55:31.040: INFO: Terminating ReplicationController externalsvc pods took: 400.288409ms Mar 8 15:55:34.877: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:55:34.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1847" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:9.910 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":280,"completed":183,"skipped":2733,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:55:34.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1694 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 8 15:55:34.998: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-1577' Mar 8 15:55:35.095: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 8 15:55:35.095: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Mar 8 15:55:35.106: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Mar 8 15:55:35.111: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Mar 8 15:55:35.136: INFO: scanned /root for discovery docs: Mar 8 15:55:35.136: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-1577' Mar 8 15:55:50.976: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 8 15:55:50.976: INFO: stdout: "Created e2e-test-httpd-rc-4a18af5f95465c19a31fecacf7af929b\nScaling up e2e-test-httpd-rc-4a18af5f95465c19a31fecacf7af929b from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-4a18af5f95465c19a31fecacf7af929b up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-4a18af5f95465c19a31fecacf7af929b to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Mar 8 15:55:50.976: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-1577' Mar 8 15:55:51.038: INFO: stderr: "" Mar 8 15:55:51.038: INFO: stdout: "e2e-test-httpd-rc-4a18af5f95465c19a31fecacf7af929b-xvcj7 e2e-test-httpd-rc-9zmrc " STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2 Mar 8 15:55:56.038: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-1577' Mar 8 15:55:56.135: INFO: stderr: "" Mar 8 15:55:56.135: INFO: stdout: "e2e-test-httpd-rc-4a18af5f95465c19a31fecacf7af929b-xvcj7 " Mar 8 15:55:56.135: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-4a18af5f95465c19a31fecacf7af929b-xvcj7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists .
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1577' Mar 8 15:55:56.205: INFO: stderr: "" Mar 8 15:55:56.205: INFO: stdout: "true" Mar 8 15:55:56.205: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-4a18af5f95465c19a31fecacf7af929b-xvcj7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1577' Mar 8 15:55:56.282: INFO: stderr: "" Mar 8 15:55:56.282: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Mar 8 15:55:56.282: INFO: e2e-test-httpd-rc-4a18af5f95465c19a31fecacf7af929b-xvcj7 is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1700 Mar 8 15:55:56.282: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-1577' Mar 8 15:55:56.359: INFO: stderr: "" Mar 8 15:55:56.359: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:55:56.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1577" for this suite. • [SLOW TEST:21.441 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1689 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":280,"completed":184,"skipped":2740,"failed":0} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:55:56.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-secret-24k2 STEP: Creating a pod to test atomic-volume-subpath Mar 8 15:55:56.418: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-24k2" in namespace "subpath-4296" to be "success or failure" Mar 8 15:55:56.435: INFO: Pod "pod-subpath-test-secret-24k2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.452596ms Mar 8 15:55:58.439: INFO: Pod "pod-subpath-test-secret-24k2": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.020462372s Mar 8 15:56:00.443: INFO: Pod "pod-subpath-test-secret-24k2": Phase="Running", Reason="", readiness=true. Elapsed: 4.024557829s Mar 8 15:56:02.447: INFO: Pod "pod-subpath-test-secret-24k2": Phase="Running", Reason="", readiness=true. Elapsed: 6.028619718s Mar 8 15:56:04.451: INFO: Pod "pod-subpath-test-secret-24k2": Phase="Running", Reason="", readiness=true. Elapsed: 8.032792189s Mar 8 15:56:06.455: INFO: Pod "pod-subpath-test-secret-24k2": Phase="Running", Reason="", readiness=true. Elapsed: 10.036634968s Mar 8 15:56:08.459: INFO: Pod "pod-subpath-test-secret-24k2": Phase="Running", Reason="", readiness=true. Elapsed: 12.040735484s Mar 8 15:56:10.463: INFO: Pod "pod-subpath-test-secret-24k2": Phase="Running", Reason="", readiness=true. Elapsed: 14.04425487s Mar 8 15:56:12.468: INFO: Pod "pod-subpath-test-secret-24k2": Phase="Running", Reason="", readiness=true. Elapsed: 16.049220536s Mar 8 15:56:14.472: INFO: Pod "pod-subpath-test-secret-24k2": Phase="Running", Reason="", readiness=true. Elapsed: 18.053236808s Mar 8 15:56:16.474: INFO: Pod "pod-subpath-test-secret-24k2": Phase="Running", Reason="", readiness=true. Elapsed: 20.055971846s Mar 8 15:56:18.478: INFO: Pod "pod-subpath-test-secret-24k2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.05982095s STEP: Saw pod success Mar 8 15:56:18.478: INFO: Pod "pod-subpath-test-secret-24k2" satisfied condition "success or failure" Mar 8 15:56:18.481: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-24k2 container test-container-subpath-secret-24k2: STEP: delete the pod Mar 8 15:56:18.522: INFO: Waiting for pod pod-subpath-test-secret-24k2 to disappear Mar 8 15:56:18.554: INFO: Pod pod-subpath-test-secret-24k2 no longer exists STEP: Deleting pod pod-subpath-test-secret-24k2 Mar 8 15:56:18.554: INFO: Deleting pod "pod-subpath-test-secret-24k2" in namespace "subpath-4296" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:56:18.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4296" for this suite. 
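The Atomic writer spec above mounts a single key of a secret through subPath and polls the pod across several Running intervals before it succeeds. A sketch of a pod of that shape, with illustrative names and image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: demo-secret                  # illustrative name
stringData:
  secret-key: contents
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-subpath-secret
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /probe/secret-key"]
    volumeMounts:
    - name: sec
      mountPath: /probe/secret-key
      subPath: secret-key            # mounts a single file out of the volume, the behavior under test
  volumes:
  - name: sec
    secret:
      secretName: demo-secret
EOF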
• [SLOW TEST:22.205 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":280,"completed":185,"skipped":2741,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:56:18.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 8 15:56:18.616: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fba0b84e-ca4e-4a94-afb9-cd2b7697aa55" in namespace "projected-5658" to be "success or failure" Mar 8 15:56:18.621: INFO: Pod "downwardapi-volume-fba0b84e-ca4e-4a94-afb9-cd2b7697aa55": Phase="Pending", Reason="", readiness=false. Elapsed: 4.468707ms Mar 8 15:56:20.625: INFO: Pod "downwardapi-volume-fba0b84e-ca4e-4a94-afb9-cd2b7697aa55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008911087s Mar 8 15:56:22.629: INFO: Pod "downwardapi-volume-fba0b84e-ca4e-4a94-afb9-cd2b7697aa55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012459645s STEP: Saw pod success Mar 8 15:56:22.629: INFO: Pod "downwardapi-volume-fba0b84e-ca4e-4a94-afb9-cd2b7697aa55" satisfied condition "success or failure" Mar 8 15:56:22.632: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-fba0b84e-ca4e-4a94-afb9-cd2b7697aa55 container client-container: STEP: delete the pod Mar 8 15:56:22.683: INFO: Waiting for pod downwardapi-volume-fba0b84e-ca4e-4a94-afb9-cd2b7697aa55 to disappear Mar 8 15:56:22.693: INFO: Pod downwardapi-volume-fba0b84e-ca4e-4a94-afb9-cd2b7697aa55 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:56:22.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5658" for this suite. 
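The projected downward API volume consumed above exposes the container's own CPU request as a file. A minimal illustrative manifest (pod name, image, and request value are placeholders); note the divisor, which controls the unit the file reports in:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-cpu-request             # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m            # the file then reads 250 for the 250m request
EOF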
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":280,"completed":186,"skipped":2753,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:56:22.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test env composition Mar 8 15:56:22.796: INFO: Waiting up to 5m0s for pod "var-expansion-1a753d5f-039a-42fb-9a06-d6537f069157" in namespace "var-expansion-3236" to be "success or failure" Mar 8 15:56:22.800: INFO: Pod "var-expansion-1a753d5f-039a-42fb-9a06-d6537f069157": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172074ms Mar 8 15:56:25.022: INFO: Pod "var-expansion-1a753d5f-039a-42fb-9a06-d6537f069157": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226016112s Mar 8 15:56:27.026: INFO: Pod "var-expansion-1a753d5f-039a-42fb-9a06-d6537f069157": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.230159047s STEP: Saw pod success Mar 8 15:56:27.026: INFO: Pod "var-expansion-1a753d5f-039a-42fb-9a06-d6537f069157" satisfied condition "success or failure" Mar 8 15:56:27.029: INFO: Trying to get logs from node latest-worker pod var-expansion-1a753d5f-039a-42fb-9a06-d6537f069157 container dapi-container: STEP: delete the pod Mar 8 15:56:27.053: INFO: Waiting for pod var-expansion-1a753d5f-039a-42fb-9a06-d6537f069157 to disappear Mar 8 15:56:27.093: INFO: Pod var-expansion-1a753d5f-039a-42fb-9a06-d6537f069157 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:56:27.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3236" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":280,"completed":187,"skipped":2808,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:56:27.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:56:27.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9422" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":280,"completed":188,"skipped":2810,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:56:27.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name projected-secret-test-a64f73ff-0721-4573-b109-bd3d2226423f STEP: Creating a pod to test consume secrets Mar 8 15:56:27.377: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-522a75f2-8062-451d-af97-86ec421921d2" in namespace "projected-4361" to be "success or failure" Mar 8 15:56:27.382: INFO: Pod "pod-projected-secrets-522a75f2-8062-451d-af97-86ec421921d2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.25346ms Mar 8 15:56:29.386: INFO: Pod "pod-projected-secrets-522a75f2-8062-451d-af97-86ec421921d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009138001s STEP: Saw pod success Mar 8 15:56:29.386: INFO: Pod "pod-projected-secrets-522a75f2-8062-451d-af97-86ec421921d2" satisfied condition "success or failure" Mar 8 15:56:29.388: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-522a75f2-8062-451d-af97-86ec421921d2 container projected-secret-volume-test: STEP: delete the pod Mar 8 15:56:29.419: INFO: Waiting for pod pod-projected-secrets-522a75f2-8062-451d-af97-86ec421921d2 to disappear Mar 8 15:56:29.424: INFO: Pod pod-projected-secrets-522a75f2-8062-451d-af97-86ec421921d2 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:56:29.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4361" for this suite. 
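The projected-secret consumption verified above corresponds to a volume of roughly this shape; all names are illustrative, not the generated ones in the log:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: demo-projected-secret        # illustrative name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-projected-secret-pod
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-secret/data-1"]
    volumeMounts:
    - name: sec
      mountPath: /etc/projected-secret
  volumes:
  - name: sec
    projected:
      sources:
      - secret:
          name: demo-projected-secret
EOF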
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":189,"skipped":2811,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:56:29.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 15:56:29.513: INFO: Waiting up to 5m0s for pod "busybox-user-65534-0a2e3103-1933-478c-8dce-471d0a3af65f" in namespace "security-context-test-2400" to be "success or failure" Mar 8 15:56:29.532: INFO: Pod "busybox-user-65534-0a2e3103-1933-478c-8dce-471d0a3af65f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.979692ms Mar 8 15:56:31.535: INFO: Pod "busybox-user-65534-0a2e3103-1933-478c-8dce-471d0a3af65f": Phase="Running", Reason="", readiness=true. Elapsed: 2.022360699s Mar 8 15:56:33.539: INFO: Pod "busybox-user-65534-0a2e3103-1933-478c-8dce-471d0a3af65f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026192301s Mar 8 15:56:33.539: INFO: Pod "busybox-user-65534-0a2e3103-1933-478c-8dce-471d0a3af65f" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:56:33.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2400" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":190,"skipped":2825,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:56:33.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 8 15:56:36.189: INFO: Successfully updated pod "pod-update-activedeadlineseconds-42f04dba-2a23-445d-acf8-006c521428c0" Mar 8 15:56:36.189: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-42f04dba-2a23-445d-acf8-006c521428c0" in namespace "pods-8101" to be "terminated due to deadline exceeded" Mar 8 15:56:36.196: INFO: Pod "pod-update-activedeadlineseconds-42f04dba-2a23-445d-acf8-006c521428c0": Phase="Running", Reason="", readiness=true. Elapsed: 6.79879ms Mar 8 15:56:38.199: INFO: Pod "pod-update-activedeadlineseconds-42f04dba-2a23-445d-acf8-006c521428c0": Phase="Running", Reason="", readiness=true. Elapsed: 2.01013329s Mar 8 15:56:40.203: INFO: Pod "pod-update-activedeadlineseconds-42f04dba-2a23-445d-acf8-006c521428c0": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.01380505s Mar 8 15:56:40.203: INFO: Pod "pod-update-activedeadlineseconds-42f04dba-2a23-445d-acf8-006c521428c0" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:56:40.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8101" for this suite. 
• [SLOW TEST:6.664 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":280,"completed":191,"skipped":2854,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:56:40.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0308 15:56:50.349529 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 8 15:56:50.349: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:56:50.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-69" for this suite. 
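The garbage-collector spec above relies on default, non-orphaning cascading deletion: deleting the ReplicationController leaves owner references behind that let the GC remove its pods, which is why the test only has to wait. By hand, with an illustrative rc name (the second line shows this era's kubectl flag for the contrasting orphaning behavior):

kubectl delete rc demo-rc                   # default: dependents are garbage collected
kubectl delete rc demo-rc --cascade=false   # alternative: pods would be orphaned, not deleted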
• [SLOW TEST:10.147 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":280,"completed":192,"skipped":2866,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:56:50.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 8 15:56:50.468: INFO: Waiting up to 5m0s for pod "downwardapi-volume-45d74de8-68bd-4dfb-a9a7-ddf8316420f8" in namespace "projected-634" to be "success or failure" Mar 8 15:56:50.515: INFO: Pod "downwardapi-volume-45d74de8-68bd-4dfb-a9a7-ddf8316420f8": Phase="Pending", Reason="", readiness=false. Elapsed: 46.861658ms Mar 8 15:56:52.519: INFO: Pod "downwardapi-volume-45d74de8-68bd-4dfb-a9a7-ddf8316420f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.050454257s STEP: Saw pod success Mar 8 15:56:52.519: INFO: Pod "downwardapi-volume-45d74de8-68bd-4dfb-a9a7-ddf8316420f8" satisfied condition "success or failure" Mar 8 15:56:52.521: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-45d74de8-68bd-4dfb-a9a7-ddf8316420f8 container client-container: STEP: delete the pod Mar 8 15:56:52.540: INFO: Waiting for pod downwardapi-volume-45d74de8-68bd-4dfb-a9a7-ddf8316420f8 to disappear Mar 8 15:56:52.543: INFO: Pod downwardapi-volume-45d74de8-68bd-4dfb-a9a7-ddf8316420f8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:56:52.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-634" for this suite. 
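Same downward API plumbing as the cpu-request spec earlier, but here the assertion is on the file mode bits set via defaultMode. An illustrative sketch (names, image, and the 0400 mode are examples):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-downward-mode           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]   # mode bits should reflect defaultMode
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400              # example mode; the test asserts whatever bits it set
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF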
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":193,"skipped":2892,"failed":0} SSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:56:52.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:56:52.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8307" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":280,"completed":194,"skipped":2896,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:56:52.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-b49650a4-7740-4309-b10f-b93a69312ad8 STEP: Creating a pod to test consume secrets Mar 8 15:56:52.719: INFO: Waiting up to 5m0s for pod "pod-secrets-69900d0f-ffd5-4450-aab1-5251f285e8bb" in namespace "secrets-6387" to be "success or failure" Mar 8 15:56:52.756: INFO: Pod "pod-secrets-69900d0f-ffd5-4450-aab1-5251f285e8bb": Phase="Pending", Reason="", readiness=false. Elapsed: 36.666499ms Mar 8 15:56:54.760: INFO: Pod "pod-secrets-69900d0f-ffd5-4450-aab1-5251f285e8bb": Phase="Running", Reason="", readiness=true. Elapsed: 2.040442038s Mar 8 15:56:56.763: INFO: Pod "pod-secrets-69900d0f-ffd5-4450-aab1-5251f285e8bb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.043440247s STEP: Saw pod success Mar 8 15:56:56.763: INFO: Pod "pod-secrets-69900d0f-ffd5-4450-aab1-5251f285e8bb" satisfied condition "success or failure" Mar 8 15:56:56.765: INFO: Trying to get logs from node latest-worker pod pod-secrets-69900d0f-ffd5-4450-aab1-5251f285e8bb container secret-volume-test: STEP: delete the pod Mar 8 15:56:56.842: INFO: Waiting for pod pod-secrets-69900d0f-ffd5-4450-aab1-5251f285e8bb to disappear Mar 8 15:56:56.855: INFO: Pod pod-secrets-69900d0f-ffd5-4450-aab1-5251f285e8bb no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:56:56.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6387" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":195,"skipped":2916,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:56:56.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 8 15:56:56.994: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fbb1b21e-7e20-4feb-a917-90c5feaf6d67" in namespace "downward-api-7439" to be "success or failure" Mar 8 15:56:57.023: INFO: Pod "downwardapi-volume-fbb1b21e-7e20-4feb-a917-90c5feaf6d67": Phase="Pending", Reason="", readiness=false. Elapsed: 29.046202ms Mar 8 15:56:59.027: INFO: Pod "downwardapi-volume-fbb1b21e-7e20-4feb-a917-90c5feaf6d67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.033100595s STEP: Saw pod success Mar 8 15:56:59.027: INFO: Pod "downwardapi-volume-fbb1b21e-7e20-4feb-a917-90c5feaf6d67" satisfied condition "success or failure" Mar 8 15:56:59.031: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-fbb1b21e-7e20-4feb-a917-90c5feaf6d67 container client-container: STEP: delete the pod Mar 8 15:56:59.084: INFO: Waiting for pod downwardapi-volume-fbb1b21e-7e20-4feb-a917-90c5feaf6d67 to disappear Mar 8 15:56:59.089: INFO: Pod downwardapi-volume-fbb1b21e-7e20-4feb-a917-90c5feaf6d67 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:56:59.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7439" for this suite. 
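When a container declares no CPU limit, the downward API falls back to the node's allocatable CPU, which is what the spec above asserts. A sketch with illustrative names; note the container sets no resources.limits:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-default-cpu-limit       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]   # with no limit set, reports node allocatable CPU
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF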
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":196,"skipped":2930,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:56:59.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Mar 8 15:56:59.193: INFO: Waiting up to 5m0s for pod "downward-api-b708aec4-50a4-42af-a88f-ed8135c26712" in namespace "downward-api-9023" to be "success or failure" Mar 8 15:56:59.197: INFO: Pod "downward-api-b708aec4-50a4-42af-a88f-ed8135c26712": Phase="Pending", Reason="", readiness=false. Elapsed: 3.825718ms Mar 8 15:57:01.201: INFO: Pod "downward-api-b708aec4-50a4-42af-a88f-ed8135c26712": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00773502s STEP: Saw pod success Mar 8 15:57:01.201: INFO: Pod "downward-api-b708aec4-50a4-42af-a88f-ed8135c26712" satisfied condition "success or failure" Mar 8 15:57:01.204: INFO: Trying to get logs from node latest-worker pod downward-api-b708aec4-50a4-42af-a88f-ed8135c26712 container dapi-container: STEP: delete the pod Mar 8 15:57:01.241: INFO: Waiting for pod downward-api-b708aec4-50a4-42af-a88f-ed8135c26712 to disappear Mar 8 15:57:01.251: INFO: Pod downward-api-b708aec4-50a4-42af-a88f-ed8135c26712 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:57:01.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9023" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":280,"completed":197,"skipped":3030,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:57:01.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name s-test-opt-del-c9c0c343-875a-43a2-a4d0-a682192a9928 STEP: Creating secret with name s-test-opt-upd-5a32f45f-6b5e-428b-b878-3847416e92c8 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-c9c0c343-875a-43a2-a4d0-a682192a9928 STEP: Updating secret s-test-opt-upd-5a32f45f-6b5e-428b-b878-3847416e92c8 STEP: Creating secret with name s-test-opt-create-3f49ef62-b5c2-4ae5-921d-ad2084449b4e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:58:13.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4412" for this suite. • [SLOW TEST:72.593 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":198,"skipped":3088,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:58:13.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 15:58:14.447: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 15:58:17.475: INFO: Waiting for amount of 
service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Mar 8 15:58:19.514: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config attach --namespace=webhook-7947 to-be-attached-pod -i -c=container1' Mar 8 15:58:19.639: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:58:19.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7947" for this suite. STEP: Destroying namespace "webhook-7947-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:5.889 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":280,"completed":199,"skipped":3116,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:58:19.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test substitution in container's command Mar 8 15:58:19.793: INFO: Waiting up to 5m0s for pod "var-expansion-c8ab8100-f55c-437d-ae13-703f5481bb4d" in namespace "var-expansion-7782" to be "success or failure" Mar 8 15:58:19.797: INFO: Pod "var-expansion-c8ab8100-f55c-437d-ae13-703f5481bb4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.384673ms Mar 8 15:58:21.801: INFO: Pod "var-expansion-c8ab8100-f55c-437d-ae13-703f5481bb4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00869298s Mar 8 15:58:23.806: INFO: Pod "var-expansion-c8ab8100-f55c-437d-ae13-703f5481bb4d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013072017s STEP: Saw pod success Mar 8 15:58:23.806: INFO: Pod "var-expansion-c8ab8100-f55c-437d-ae13-703f5481bb4d" satisfied condition "success or failure" Mar 8 15:58:23.809: INFO: Trying to get logs from node latest-worker pod var-expansion-c8ab8100-f55c-437d-ae13-703f5481bb4d container dapi-container: STEP: delete the pod Mar 8 15:58:23.849: INFO: Waiting for pod var-expansion-c8ab8100-f55c-437d-ae13-703f5481bb4d to disappear Mar 8 15:58:23.857: INFO: Pod var-expansion-c8ab8100-f55c-437d-ae13-703f5481bb4d no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:58:23.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7782" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":280,"completed":200,"skipped":3156,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:58:23.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 8 15:58:24.411: INFO: Pod name wrapped-volume-race-e0c5403b-e8fe-4609-a312-8108cc65fadd: Found 0 pods out of 5 Mar 8 15:58:29.417: INFO: Pod name wrapped-volume-race-e0c5403b-e8fe-4609-a312-8108cc65fadd: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-e0c5403b-e8fe-4609-a312-8108cc65fadd in namespace emptydir-wrapper-6246, will wait for the garbage collector to delete the pods Mar 8 15:58:39.512: INFO: Deleting ReplicationController wrapped-volume-race-e0c5403b-e8fe-4609-a312-8108cc65fadd took: 20.859643ms Mar 8 15:58:39.812: INFO: Terminating ReplicationController wrapped-volume-race-e0c5403b-e8fe-4609-a312-8108cc65fadd pods took: 300.240856ms STEP: Creating RC which spawns configmap-volume pods Mar 8 15:58:53.180: INFO: Pod name wrapped-volume-race-dd7407ec-73f9-4cd9-82be-aa067cc57bbe: Found 0 pods out of 5 Mar 8 15:58:58.186: INFO: Pod name wrapped-volume-race-dd7407ec-73f9-4cd9-82be-aa067cc57bbe: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-dd7407ec-73f9-4cd9-82be-aa067cc57bbe in namespace emptydir-wrapper-6246, will wait for the garbage collector to delete the pods Mar 8 15:59:10.308: INFO: Deleting ReplicationController wrapped-volume-race-dd7407ec-73f9-4cd9-82be-aa067cc57bbe took: 49.504497ms Mar 8 15:59:10.608: INFO: Terminating ReplicationController wrapped-volume-race-dd7407ec-73f9-4cd9-82be-aa067cc57bbe pods took: 300.232971ms STEP: Creating RC which spawns configmap-volume pods Mar 8 15:59:15.754: INFO: Pod name 
wrapped-volume-race-6a298fa0-23e4-4a76-bf14-c3980ee41f5d: Found 0 pods out of 5 Mar 8 15:59:20.761: INFO: Pod name wrapped-volume-race-6a298fa0-23e4-4a76-bf14-c3980ee41f5d: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-6a298fa0-23e4-4a76-bf14-c3980ee41f5d in namespace emptydir-wrapper-6246, will wait for the garbage collector to delete the pods Mar 8 15:59:30.869: INFO: Deleting ReplicationController wrapped-volume-race-6a298fa0-23e4-4a76-bf14-c3980ee41f5d took: 35.017349ms Mar 8 15:59:31.169: INFO: Terminating ReplicationController wrapped-volume-race-6a298fa0-23e4-4a76-bf14-c3980ee41f5d pods took: 300.287049ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:59:37.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-6246" for this suite. • [SLOW TEST:73.722 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":280,"completed":201,"skipped":3187,"failed":0} S ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:59:37.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 15:59:37.666: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-588bcd4c-feff-4c30-84c8-26e0b31fecc7" in namespace "security-context-test-7734" to be "success or failure" Mar 8 15:59:37.670: INFO: Pod "busybox-privileged-false-588bcd4c-feff-4c30-84c8-26e0b31fecc7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.282016ms Mar 8 15:59:39.696: INFO: Pod "busybox-privileged-false-588bcd4c-feff-4c30-84c8-26e0b31fecc7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.029331315s Mar 8 15:59:39.696: INFO: Pod "busybox-privileged-false-588bcd4c-feff-4c30-84c8-26e0b31fecc7" satisfied condition "success or failure" Mar 8 15:59:39.706: INFO: Got logs for pod "busybox-privileged-false-588bcd4c-feff-4c30-84c8-26e0b31fecc7": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:59:39.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7734" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":202,"skipped":3188,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:59:39.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: set up a multi version CRD Mar 8 15:59:39.765: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:59:53.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3009" for this suite.
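A sketch of reproducing the unserved-version check above by hand, assuming a multi-version CRD named foos.example.com and jq on the path (both illustrative): mark one version served: false, then confirm its definitions drop out of the published OpenAPI document.

# Mark the second version of the CRD as not served (the index is illustrative):
kubectl patch crd foos.example.com --type=json \
  -p '[{"op":"replace","path":"/spec/versions/1/served","value":false}]'
# Definitions for the unserved version should disappear from the published spec:
kubectl get --raw /openapi/v2 | jq '.definitions | keys'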
• [SLOW TEST:13.294 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":280,"completed":203,"skipped":3192,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:59:53.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 15:59:53.056: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:59:54.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6669" for this suite. 
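The create/delete cycle above boils down to registering a CRD with the API server and removing it again; a minimal sketch with an illustrative group and kind:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com        # illustrative group and plural
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
EOF
kubectl delete crd foos.example.com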
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":280,"completed":204,"skipped":3224,"failed":0} S ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:59:54.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 15:59:54.153: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 15:59:58.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3177" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":280,"completed":205,"skipped":3225,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 15:59:58.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-configmap-9gfs STEP: Creating a pod to test atomic-volume-subpath Mar 8 15:59:58.284: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-9gfs" in namespace "subpath-2499" to be "success or failure" Mar 8 15:59:58.306: INFO: Pod "pod-subpath-test-configmap-9gfs": Phase="Pending", Reason="", readiness=false. Elapsed: 21.821152ms Mar 8 16:00:00.310: INFO: Pod "pod-subpath-test-configmap-9gfs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025550983s Mar 8 16:00:02.313: INFO: Pod "pod-subpath-test-configmap-9gfs": Phase="Running", Reason="", readiness=true. Elapsed: 4.029439138s Mar 8 16:00:04.318: INFO: Pod "pod-subpath-test-configmap-9gfs": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.033689762s Mar 8 16:00:06.322: INFO: Pod "pod-subpath-test-configmap-9gfs": Phase="Running", Reason="", readiness=true. Elapsed: 8.037953783s Mar 8 16:00:08.326: INFO: Pod "pod-subpath-test-configmap-9gfs": Phase="Running", Reason="", readiness=true. Elapsed: 10.041903561s Mar 8 16:00:10.330: INFO: Pod "pod-subpath-test-configmap-9gfs": Phase="Running", Reason="", readiness=true. Elapsed: 12.045508418s Mar 8 16:00:12.334: INFO: Pod "pod-subpath-test-configmap-9gfs": Phase="Running", Reason="", readiness=true. Elapsed: 14.04959726s Mar 8 16:00:14.338: INFO: Pod "pod-subpath-test-configmap-9gfs": Phase="Running", Reason="", readiness=true. Elapsed: 16.053964203s Mar 8 16:00:16.342: INFO: Pod "pod-subpath-test-configmap-9gfs": Phase="Running", Reason="", readiness=true. Elapsed: 18.05781723s Mar 8 16:00:18.346: INFO: Pod "pod-subpath-test-configmap-9gfs": Phase="Running", Reason="", readiness=true. Elapsed: 20.062462259s Mar 8 16:00:20.355: INFO: Pod "pod-subpath-test-configmap-9gfs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.070626109s STEP: Saw pod success Mar 8 16:00:20.355: INFO: Pod "pod-subpath-test-configmap-9gfs" satisfied condition "success or failure" Mar 8 16:00:20.357: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-9gfs container test-container-subpath-configmap-9gfs: STEP: delete the pod Mar 8 16:00:20.384: INFO: Waiting for pod pod-subpath-test-configmap-9gfs to disappear Mar 8 16:00:20.386: INFO: Pod pod-subpath-test-configmap-9gfs no longer exists STEP: Deleting pod pod-subpath-test-configmap-9gfs Mar 8 16:00:20.386: INFO: Deleting pod "pod-subpath-test-configmap-9gfs" in namespace "subpath-2499" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:00:20.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2499" for this suite. 
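The subpath test above mounts a single configmap key over a file that already exists in the container image; a minimal sketch of that shape (configmap name, key, and target path are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-configmap-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "cat /etc/hostname"]
    volumeMounts:
    - name: config
      mountPath: /etc/hostname   # a file that already exists in the image
      subPath: hostname          # single key projected over that file
  volumes:
  - name: config
    configMap:
      name: my-configmap         # illustrative; must contain a "hostname" key
EOF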
• [SLOW TEST:22.193 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":280,"completed":206,"skipped":3237,"failed":0} [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:00:20.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1863 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 8 16:00:20.437: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2599' Mar 8 16:00:20.524: INFO: stderr: "" Mar 8 16:00:20.524: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1868 Mar 8 16:00:20.528: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2599' Mar 8 16:00:32.487: INFO: stderr: "" Mar 8 16:00:32.487: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:00:32.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2599" for this suite. 
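A condensed, by-hand equivalent of the run-pod invocation above; the explicit --generator=run-pod/v1 flag can be omitted, since --restart=Never already selects bare-pod creation at this kubectl version (and generators were dropped from kubectl entirely in later releases):

kubectl run e2e-test-httpd-pod --restart=Never \
  --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2599
kubectl get pod e2e-test-httpd-pod --namespace=kubectl-2599   # verify the pod object exists
kubectl delete pod e2e-test-httpd-pod --namespace=kubectl-2599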
• [SLOW TEST:12.099 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1859 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":280,"completed":207,"skipped":3237,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:00:32.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 8 16:00:32.575: INFO: Waiting up to 5m0s for pod "downwardapi-volume-54979b56-b1f9-4d6c-9536-f0faa95f8dca" in namespace "downward-api-9062" to be "success or failure" Mar 8 16:00:32.590: INFO: Pod "downwardapi-volume-54979b56-b1f9-4d6c-9536-f0faa95f8dca": Phase="Pending", Reason="", readiness=false. Elapsed: 14.786887ms Mar 8 16:00:34.594: INFO: Pod "downwardapi-volume-54979b56-b1f9-4d6c-9536-f0faa95f8dca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018664133s STEP: Saw pod success Mar 8 16:00:34.594: INFO: Pod "downwardapi-volume-54979b56-b1f9-4d6c-9536-f0faa95f8dca" satisfied condition "success or failure" Mar 8 16:00:34.596: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-54979b56-b1f9-4d6c-9536-f0faa95f8dca container client-container: STEP: delete the pod Mar 8 16:00:34.621: INFO: Waiting for pod downwardapi-volume-54979b56-b1f9-4d6c-9536-f0faa95f8dca to disappear Mar 8 16:00:34.625: INFO: Pod downwardapi-volume-54979b56-b1f9-4d6c-9536-f0faa95f8dca no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:00:34.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9062" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":280,"completed":208,"skipped":3278,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:00:34.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 16:00:34.670: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 8 16:00:36.709: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:00:37.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6565" for this suite. 
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":280,"completed":209,"skipped":3302,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:00:37.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Mar 8 16:00:41.865: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-6552 PodName:pod-sharedvolume-af8e8c04-526a-48c9-89c5-25c59b7ef7bb ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 16:00:41.865: INFO: >>> kubeConfig: /root/.kube/config I0308 16:00:41.898420 7 log.go:172] (0xc000fdebb0) (0xc0012e6820) Create stream I0308 16:00:41.898458 7 log.go:172] (0xc000fdebb0) (0xc0012e6820) Stream added, broadcasting: 1 I0308 16:00:41.905401 7 log.go:172] (0xc000fdebb0) Reply frame received for 1 I0308 16:00:41.905441 7 log.go:172] (0xc000fdebb0) (0xc001f4d540) Create stream I0308 16:00:41.905453 7 log.go:172] (0xc000fdebb0) (0xc001f4d540) Stream added, broadcasting: 3 I0308 16:00:41.906427 7 log.go:172] (0xc000fdebb0) Reply frame received for 3 I0308 16:00:41.906490 7 log.go:172] (0xc000fdebb0) (0xc002975ae0) Create stream I0308 16:00:41.906507 7 log.go:172] (0xc000fdebb0) (0xc002975ae0) Stream added, broadcasting: 5 I0308 16:00:41.907636 7 log.go:172] (0xc000fdebb0) Reply frame received for 5 I0308 16:00:41.957454 7 log.go:172] (0xc000fdebb0) Data frame received for 3 I0308 16:00:41.957482 7 log.go:172] (0xc001f4d540) (3) Data frame handling I0308 16:00:41.957502 7 log.go:172] (0xc001f4d540) (3) Data frame sent I0308 16:00:41.957510 7 log.go:172] (0xc000fdebb0) Data frame received for 3 I0308 16:00:41.957517 7 log.go:172] (0xc001f4d540) (3) Data frame handling I0308 16:00:41.957634 7 log.go:172] (0xc000fdebb0) Data frame received for 5 I0308 16:00:41.957649 7 log.go:172] (0xc002975ae0) (5) Data frame handling I0308 16:00:41.959463 7 log.go:172] (0xc000fdebb0) Data frame received for 1 I0308 16:00:41.959495 7 log.go:172] (0xc0012e6820) (1) Data frame handling I0308 16:00:41.959514 7 log.go:172] (0xc0012e6820) (1) Data frame sent I0308 16:00:41.959527 7 log.go:172] (0xc000fdebb0) (0xc0012e6820) Stream removed, broadcasting: 1 I0308 16:00:41.959546 7 log.go:172] (0xc000fdebb0) Go away received I0308 16:00:41.959696 7 log.go:172] (0xc000fdebb0) (0xc0012e6820) Stream removed, broadcasting: 1 I0308 16:00:41.959737 7 log.go:172] (0xc000fdebb0) (0xc001f4d540) Stream removed, broadcasting: 3 I0308 16:00:41.959751 7 log.go:172] (0xc000fdebb0) (0xc002975ae0) Stream removed, broadcasting: 5 Mar 8 16:00:41.959: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:00:41.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6552" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":280,"completed":210,"skipped":3335,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:00:41.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-projected-d62x STEP: Creating a pod to test atomic-volume-subpath Mar 8 16:00:42.039: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-d62x" in namespace "subpath-5276" to be "success or failure" Mar 8 16:00:42.068: INFO: Pod "pod-subpath-test-projected-d62x": Phase="Pending", Reason="", readiness=false. Elapsed: 28.829114ms Mar 8 16:00:44.072: INFO: Pod "pod-subpath-test-projected-d62x": Phase="Running", Reason="", readiness=true. Elapsed: 2.03269293s Mar 8 16:00:46.076: INFO: Pod "pod-subpath-test-projected-d62x": Phase="Running", Reason="", readiness=true. Elapsed: 4.036720322s Mar 8 16:00:48.080: INFO: Pod "pod-subpath-test-projected-d62x": Phase="Running", Reason="", readiness=true. Elapsed: 6.040442268s Mar 8 16:00:50.084: INFO: Pod "pod-subpath-test-projected-d62x": Phase="Running", Reason="", readiness=true. Elapsed: 8.04457696s Mar 8 16:00:52.087: INFO: Pod "pod-subpath-test-projected-d62x": Phase="Running", Reason="", readiness=true. Elapsed: 10.047764516s Mar 8 16:00:54.090: INFO: Pod "pod-subpath-test-projected-d62x": Phase="Running", Reason="", readiness=true. Elapsed: 12.051080031s Mar 8 16:00:56.100: INFO: Pod "pod-subpath-test-projected-d62x": Phase="Running", Reason="", readiness=true. Elapsed: 14.060462201s Mar 8 16:00:58.103: INFO: Pod "pod-subpath-test-projected-d62x": Phase="Running", Reason="", readiness=true. Elapsed: 16.063824023s Mar 8 16:01:00.106: INFO: Pod "pod-subpath-test-projected-d62x": Phase="Running", Reason="", readiness=true. Elapsed: 18.066924277s Mar 8 16:01:02.110: INFO: Pod "pod-subpath-test-projected-d62x": Phase="Running", Reason="", readiness=true. Elapsed: 20.070413038s Mar 8 16:01:04.152: INFO: Pod "pod-subpath-test-projected-d62x": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.112324037s STEP: Saw pod success Mar 8 16:01:04.152: INFO: Pod "pod-subpath-test-projected-d62x" satisfied condition "success or failure" Mar 8 16:01:04.154: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-d62x container test-container-subpath-projected-d62x: STEP: delete the pod Mar 8 16:01:04.173: INFO: Waiting for pod pod-subpath-test-projected-d62x to disappear Mar 8 16:01:04.177: INFO: Pod pod-subpath-test-projected-d62x no longer exists STEP: Deleting pod pod-subpath-test-projected-d62x Mar 8 16:01:04.177: INFO: Deleting pod "pod-subpath-test-projected-d62x" in namespace "subpath-5276" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:01:04.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5276" for this suite. • [SLOW TEST:22.203 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":280,"completed":211,"skipped":3352,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:01:04.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-4216 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-4216 STEP: Creating statefulset with conflicting port in namespace statefulset-4216 STEP: Waiting until pod test-pod will start running in namespace statefulset-4216 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4216 Mar 8 16:01:08.288: INFO: Observed stateful pod in namespace: statefulset-4216, name: ss-0, uid: acfbbbf6-d0f5-450e-b3d7-2624d1fece3c, status phase: Pending. Waiting for statefulset controller to delete. Mar 8 16:01:08.455: INFO: Observed stateful pod in namespace: statefulset-4216, name: ss-0, uid: acfbbbf6-d0f5-450e-b3d7-2624d1fece3c, status phase: Failed. Waiting for statefulset controller to delete. 
Mar 8 16:01:08.461: INFO: Observed stateful pod in namespace: statefulset-4216, name: ss-0, uid: acfbbbf6-d0f5-450e-b3d7-2624d1fece3c, status phase: Failed. Waiting for statefulset controller to delete. Mar 8 16:01:08.466: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4216 STEP: Removing pod with conflicting port in namespace statefulset-4216 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-4216 and running [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 8 16:01:12.569: INFO: Deleting all statefulsets in ns statefulset-4216 Mar 8 16:01:12.571: INFO: Scaling statefulset ss to 0 Mar 8 16:01:22.594: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 16:01:22.596: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:01:22.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4216" for this suite. • [SLOW TEST:18.434 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":280,"completed":212,"skipped":3359,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:01:22.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a replication controller Mar 8 16:01:22.669: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3927' Mar 8 16:01:23.242: INFO: stderr: "" Mar 8 16:01:23.243: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up.
Mar 8 16:01:23.243: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3927' Mar 8 16:01:23.355: INFO: stderr: "" Mar 8 16:01:23.355: INFO: stdout: "update-demo-nautilus-k8pwt update-demo-nautilus-vg9fs " Mar 8 16:01:23.355: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k8pwt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3927' Mar 8 16:01:23.426: INFO: stderr: "" Mar 8 16:01:23.426: INFO: stdout: "" Mar 8 16:01:23.426: INFO: update-demo-nautilus-k8pwt is created but not running Mar 8 16:01:28.426: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3927' Mar 8 16:01:28.524: INFO: stderr: "" Mar 8 16:01:28.524: INFO: stdout: "update-demo-nautilus-k8pwt update-demo-nautilus-vg9fs " Mar 8 16:01:28.524: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k8pwt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3927' Mar 8 16:01:28.614: INFO: stderr: "" Mar 8 16:01:28.614: INFO: stdout: "true" Mar 8 16:01:28.614: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k8pwt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3927' Mar 8 16:01:28.696: INFO: stderr: "" Mar 8 16:01:28.696: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 16:01:28.696: INFO: validating pod update-demo-nautilus-k8pwt Mar 8 16:01:28.699: INFO: got data: { "image": "nautilus.jpg" } Mar 8 16:01:28.699: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 16:01:28.699: INFO: update-demo-nautilus-k8pwt is verified up and running Mar 8 16:01:28.699: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vg9fs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3927' Mar 8 16:01:28.780: INFO: stderr: "" Mar 8 16:01:28.780: INFO: stdout: "true" Mar 8 16:01:28.780: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vg9fs -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3927' Mar 8 16:01:28.848: INFO: stderr: "" Mar 8 16:01:28.848: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 16:01:28.848: INFO: validating pod update-demo-nautilus-vg9fs Mar 8 16:01:28.851: INFO: got data: { "image": "nautilus.jpg" } Mar 8 16:01:28.851: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 16:01:28.851: INFO: update-demo-nautilus-vg9fs is verified up and running STEP: scaling down the replication controller Mar 8 16:01:28.853: INFO: scanned /root for discovery docs: Mar 8 16:01:28.853: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3927' Mar 8 16:01:29.940: INFO: stderr: "" Mar 8 16:01:29.940: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 8 16:01:29.940: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3927' Mar 8 16:01:30.062: INFO: stderr: "" Mar 8 16:01:30.062: INFO: stdout: "update-demo-nautilus-k8pwt update-demo-nautilus-vg9fs " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 8 16:01:35.063: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3927' Mar 8 16:01:35.184: INFO: stderr: "" Mar 8 16:01:35.184: INFO: stdout: "update-demo-nautilus-vg9fs " Mar 8 16:01:35.184: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vg9fs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3927' Mar 8 16:01:35.292: INFO: stderr: "" Mar 8 16:01:35.292: INFO: stdout: "true" Mar 8 16:01:35.292: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vg9fs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3927' Mar 8 16:01:35.378: INFO: stderr: "" Mar 8 16:01:35.378: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 16:01:35.378: INFO: validating pod update-demo-nautilus-vg9fs Mar 8 16:01:35.381: INFO: got data: { "image": "nautilus.jpg" } Mar 8 16:01:35.381: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 8 16:01:35.381: INFO: update-demo-nautilus-vg9fs is verified up and running STEP: scaling up the replication controller Mar 8 16:01:35.384: INFO: scanned /root for discovery docs: Mar 8 16:01:35.384: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3927' Mar 8 16:01:36.492: INFO: stderr: "" Mar 8 16:01:36.493: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 8 16:01:36.493: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3927' Mar 8 16:01:36.591: INFO: stderr: "" Mar 8 16:01:36.591: INFO: stdout: "update-demo-nautilus-k4pqn update-demo-nautilus-vg9fs " Mar 8 16:01:36.591: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k4pqn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3927' Mar 8 16:01:36.680: INFO: stderr: "" Mar 8 16:01:36.680: INFO: stdout: "" Mar 8 16:01:36.680: INFO: update-demo-nautilus-k4pqn is created but not running Mar 8 16:01:41.680: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3927' Mar 8 16:01:41.812: INFO: stderr: "" Mar 8 16:01:41.813: INFO: stdout: "update-demo-nautilus-k4pqn update-demo-nautilus-vg9fs " Mar 8 16:01:41.813: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k4pqn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3927' Mar 8 16:01:41.927: INFO: stderr: "" Mar 8 16:01:41.927: INFO: stdout: "true" Mar 8 16:01:41.927: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k4pqn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3927' Mar 8 16:01:42.018: INFO: stderr: "" Mar 8 16:01:42.018: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 16:01:42.018: INFO: validating pod update-demo-nautilus-k4pqn Mar 8 16:01:42.021: INFO: got data: { "image": "nautilus.jpg" } Mar 8 16:01:42.021: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 16:01:42.021: INFO: update-demo-nautilus-k4pqn is verified up and running Mar 8 16:01:42.021: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vg9fs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3927' Mar 8 16:01:42.094: INFO: stderr: "" Mar 8 16:01:42.094: INFO: stdout: "true" Mar 8 16:01:42.094: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vg9fs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3927' Mar 8 16:01:42.171: INFO: stderr: "" Mar 8 16:01:42.171: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 16:01:42.171: INFO: validating pod update-demo-nautilus-vg9fs Mar 8 16:01:42.173: INFO: got data: { "image": "nautilus.jpg" } Mar 8 16:01:42.173: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 16:01:42.173: INFO: update-demo-nautilus-vg9fs is verified up and running STEP: using delete to clean up resources Mar 8 16:01:42.173: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3927' Mar 8 16:01:42.240: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 16:01:42.240: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 8 16:01:42.240: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3927' Mar 8 16:01:42.310: INFO: stderr: "No resources found in kubectl-3927 namespace.\n" Mar 8 16:01:42.310: INFO: stdout: "" Mar 8 16:01:42.310: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3927 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 8 16:01:42.379: INFO: stderr: "" Mar 8 16:01:42.379: INFO: stdout: "update-demo-nautilus-k4pqn\nupdate-demo-nautilus-vg9fs\n" Mar 8 16:01:42.879: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3927' Mar 8 16:01:42.959: INFO: stderr: "No resources found in kubectl-3927 namespace.\n" Mar 8 16:01:42.959: INFO: stdout: "" Mar 8 16:01:42.959: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3927 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 8 16:01:43.036: INFO: stderr: "" Mar 8 16:01:43.036: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:01:43.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3927" for this suite. 
• [SLOW TEST:20.424 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":280,"completed":213,"skipped":3368,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:01:43.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name projected-secret-test-6a136158-3155-49f1-8873-ef7f1bba6a3c STEP: Creating a pod to test consume secrets Mar 8 16:01:43.185: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-583592bf-28e4-443c-a1c9-8d82166a0298" in namespace "projected-9129" to be "success or failure" Mar 8 16:01:43.205: INFO: Pod "pod-projected-secrets-583592bf-28e4-443c-a1c9-8d82166a0298": Phase="Pending", Reason="", readiness=false. Elapsed: 19.433838ms Mar 8 16:01:45.209: INFO: Pod "pod-projected-secrets-583592bf-28e4-443c-a1c9-8d82166a0298": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023548618s STEP: Saw pod success Mar 8 16:01:45.209: INFO: Pod "pod-projected-secrets-583592bf-28e4-443c-a1c9-8d82166a0298" satisfied condition "success or failure" Mar 8 16:01:45.212: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-583592bf-28e4-443c-a1c9-8d82166a0298 container secret-volume-test: STEP: delete the pod Mar 8 16:01:45.254: INFO: Waiting for pod pod-projected-secrets-583592bf-28e4-443c-a1c9-8d82166a0298 to disappear Mar 8 16:01:45.257: INFO: Pod pod-projected-secrets-583592bf-28e4-443c-a1c9-8d82166a0298 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:01:45.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9129" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":280,"completed":214,"skipped":3379,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:01:45.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap configmap-6469/configmap-test-96d9af9f-1352-4f60-b1b6-0530d9271545 STEP: Creating a pod to test consume configMaps Mar 8 16:01:45.333: INFO: Waiting up to 5m0s for pod "pod-configmaps-15b7070d-0a5c-4271-bbe7-eb1c12391f79" in namespace "configmap-6469" to be "success or failure" Mar 8 16:01:45.336: INFO: Pod "pod-configmaps-15b7070d-0a5c-4271-bbe7-eb1c12391f79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.474004ms Mar 8 16:01:47.339: INFO: Pod "pod-configmaps-15b7070d-0a5c-4271-bbe7-eb1c12391f79": Phase="Running", Reason="", readiness=true. Elapsed: 2.00606901s Mar 8 16:01:49.343: INFO: Pod "pod-configmaps-15b7070d-0a5c-4271-bbe7-eb1c12391f79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009975443s STEP: Saw pod success Mar 8 16:01:49.343: INFO: Pod "pod-configmaps-15b7070d-0a5c-4271-bbe7-eb1c12391f79" satisfied condition "success or failure" Mar 8 16:01:49.346: INFO: Trying to get logs from node latest-worker pod pod-configmaps-15b7070d-0a5c-4271-bbe7-eb1c12391f79 container env-test: STEP: delete the pod Mar 8 16:01:49.364: INFO: Waiting for pod pod-configmaps-15b7070d-0a5c-4271-bbe7-eb1c12391f79 to disappear Mar 8 16:01:49.390: INFO: Pod pod-configmaps-15b7070d-0a5c-4271-bbe7-eb1c12391f79 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:01:49.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6469" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":280,"completed":215,"skipped":3392,"failed":0} S ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:01:49.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod test-webserver-5a889e5d-6af0-4ab4-a354-7392b935915e in namespace container-probe-2696 Mar 8 16:01:51.539: INFO: Started pod test-webserver-5a889e5d-6af0-4ab4-a354-7392b935915e in namespace container-probe-2696 STEP: checking the pod's current state and verifying that restartCount is present Mar 8 16:01:51.548: INFO: Initial restart count of pod test-webserver-5a889e5d-6af0-4ab4-a354-7392b935915e is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:05:52.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2696" for this suite. • [SLOW TEST:243.300 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":280,"completed":216,"skipped":3393,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:05:52.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88 Mar 8 16:05:52.770: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 8 16:05:52.819: INFO: Waiting for terminating namespaces to be deleted... 
Mar 8 16:05:52.822: INFO: Logging pods the kubelet thinks are on node latest-worker before test Mar 8 16:05:52.835: INFO: kube-proxy-9jc24 from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container status recorded) Mar 8 16:05:52.835: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 16:05:52.835: INFO: kindnet-2j5xm from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container status recorded) Mar 8 16:05:52.835: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 16:05:52.835: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Mar 8 16:05:52.862: INFO: kube-proxy-cx5xz from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container status recorded) Mar 8 16:05:52.862: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 16:05:52.862: INFO: kindnet-spz5f from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container status recorded) Mar 8 16:05:52.862: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 16:05:52.862: INFO: coredns-6955765f44-cgshp from kube-system started at 2020-03-08 14:50:16 +0000 UTC (1 container status recorded) Mar 8 16:05:52.862: INFO: Container coredns ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-54ec2ad2-a629-4d0e-8d22-bf9ac02534d1 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-54ec2ad2-a629-4d0e-8d22-bf9ac02534d1 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-54ec2ad2-a629-4d0e-8d22-bf9ac02534d1 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:06:01.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3734" for this suite.
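The scheduling steps above work because a host-port conflict is keyed on the full (hostIP, hostPort, protocol) triple, not on hostPort alone. A minimal sketch of pod1 (the agnhost image appears elsewhere in this log; pinning via the well-known kubernetes.io/hostname label stands in for the random label the test applies):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostport-demo-pod1
spec:
  nodeSelector:
    kubernetes.io/hostname: latest-worker
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.1
      protocol: TCP
EOF
# pod2 (hostIP 127.0.0.2, TCP) and pod3 (hostIP 127.0.0.2, UDP) can reuse hostPort
# 54321 on the same node and still schedule, because no triple is repeated.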
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 • [SLOW TEST:8.383 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":280,"completed":217,"skipped":3407,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:06:01.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 16:06:01.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Mar 8 16:06:01.756: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-08T16:06:01Z generation:1 name:name1 resourceVersion:27654 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:595e2be1-2a74-420e-85f9-25e8c8794a67] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Mar 8 16:06:11.761: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-08T16:06:11Z generation:1 name:name2 resourceVersion:27712 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:35f8f3fa-c688-4f64-bd64-2e40223f5d97] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Mar 8 16:06:21.767: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-08T16:06:01Z generation:2 name:name1 resourceVersion:27750 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:595e2be1-2a74-420e-85f9-25e8c8794a67] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Mar 8 16:06:31.774: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-08T16:06:11Z generation:2 name:name2 resourceVersion:27780 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:35f8f3fa-c688-4f64-bd64-2e40223f5d97] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Mar 8 16:06:41.781: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu 
metadata:map[creationTimestamp:2020-03-08T16:06:01Z generation:2 name:name1 resourceVersion:27808 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:595e2be1-2a74-420e-85f9-25e8c8794a67] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Mar 8 16:06:51.823: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-08T16:06:11Z generation:2 name:name2 resourceVersion:27838 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:35f8f3fa-c688-4f64-bd64-2e40223f5d97] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:07:02.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-544" for this suite. • [SLOW TEST:61.255 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":280,"completed":218,"skipped":3424,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:07:02.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:07:02.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4969" for this suite. STEP: Destroying namespace "nspatchtest-ff4fff89-fce5-485e-9147-dea8620a9ee6-3498" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":280,"completed":219,"skipped":3450,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:07:02.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1735 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 8 16:07:02.566: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-5438' Mar 8 16:07:04.797: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 8 16:07:04.797: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1740 Mar 8 16:07:06.845: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5438' Mar 8 16:07:06.960: INFO: stderr: "" Mar 8 16:07:06.960: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:07:06.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5438" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":280,"completed":220,"skipped":3459,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:07:06.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-map-79caf7c8-fbd3-44d8-8833-5096d40ae907 STEP: Creating a pod to test consume configMaps Mar 8 16:07:07.116: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3c618479-c817-4be7-849e-e7ed4df6f2c5" in namespace "projected-2956" to be "success or failure" Mar 8 16:07:07.127: INFO: Pod "pod-projected-configmaps-3c618479-c817-4be7-849e-e7ed4df6f2c5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.932856ms Mar 8 16:07:09.138: INFO: Pod "pod-projected-configmaps-3c618479-c817-4be7-849e-e7ed4df6f2c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.022795728s STEP: Saw pod success Mar 8 16:07:09.138: INFO: Pod "pod-projected-configmaps-3c618479-c817-4be7-849e-e7ed4df6f2c5" satisfied condition "success or failure" Mar 8 16:07:09.142: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-3c618479-c817-4be7-849e-e7ed4df6f2c5 container projected-configmap-volume-test: STEP: delete the pod Mar 8 16:07:09.182: INFO: Waiting for pod pod-projected-configmaps-3c618479-c817-4be7-849e-e7ed4df6f2c5 to disappear Mar 8 16:07:09.186: INFO: Pod pod-projected-configmaps-3c618479-c817-4be7-849e-e7ed4df6f2c5 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:07:09.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2956" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":221,"skipped":3480,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:07:09.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 8 16:07:09.248: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8fa6692f-2cb3-4d88-b439-535398a93a81" in namespace "downward-api-5151" to be "success or failure" Mar 8 16:07:09.283: INFO: Pod "downwardapi-volume-8fa6692f-2cb3-4d88-b439-535398a93a81": Phase="Pending", Reason="", readiness=false. Elapsed: 34.18802ms Mar 8 16:07:11.287: INFO: Pod "downwardapi-volume-8fa6692f-2cb3-4d88-b439-535398a93a81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.038360432s STEP: Saw pod success Mar 8 16:07:11.287: INFO: Pod "downwardapi-volume-8fa6692f-2cb3-4d88-b439-535398a93a81" satisfied condition "success or failure" Mar 8 16:07:11.289: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-8fa6692f-2cb3-4d88-b439-535398a93a81 container client-container: STEP: delete the pod Mar 8 16:07:11.338: INFO: Waiting for pod downwardapi-volume-8fa6692f-2cb3-4d88-b439-535398a93a81 to disappear Mar 8 16:07:11.343: INFO: Pod downwardapi-volume-8fa6692f-2cb3-4d88-b439-535398a93a81 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:07:11.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5151" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":222,"skipped":3508,"failed":0} SSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:07:11.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:07:19.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5945" for this suite. • [SLOW TEST:8.076 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":280,"completed":223,"skipped":3514,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:07:19.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 8 16:07:19.508: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6251e751-a1e6-48ff-8c84-483397a8426b" in namespace "downward-api-33" to be "success or failure" Mar 8 16:07:19.520: INFO: Pod "downwardapi-volume-6251e751-a1e6-48ff-8c84-483397a8426b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.130934ms Mar 8 16:07:21.524: INFO: Pod "downwardapi-volume-6251e751-a1e6-48ff-8c84-483397a8426b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.016055256s STEP: Saw pod success Mar 8 16:07:21.524: INFO: Pod "downwardapi-volume-6251e751-a1e6-48ff-8c84-483397a8426b" satisfied condition "success or failure" Mar 8 16:07:21.528: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-6251e751-a1e6-48ff-8c84-483397a8426b container client-container: STEP: delete the pod Mar 8 16:07:21.555: INFO: Waiting for pod downwardapi-volume-6251e751-a1e6-48ff-8c84-483397a8426b to disappear Mar 8 16:07:21.559: INFO: Pod downwardapi-volume-6251e751-a1e6-48ff-8c84-483397a8426b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:07:21.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-33" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":280,"completed":224,"skipped":3545,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:07:21.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Mar 8 16:07:21.632: INFO: Waiting up to 5m0s for pod "downward-api-b87a3632-8011-4621-b3cc-8ac7db24517c" in namespace "downward-api-4388" to be "success or failure" Mar 8 16:07:21.637: INFO: Pod "downward-api-b87a3632-8011-4621-b3cc-8ac7db24517c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.97939ms Mar 8 16:07:23.661: INFO: Pod "downward-api-b87a3632-8011-4621-b3cc-8ac7db24517c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.028710354s STEP: Saw pod success Mar 8 16:07:23.661: INFO: Pod "downward-api-b87a3632-8011-4621-b3cc-8ac7db24517c" satisfied condition "success or failure" Mar 8 16:07:23.664: INFO: Trying to get logs from node latest-worker2 pod downward-api-b87a3632-8011-4621-b3cc-8ac7db24517c container dapi-container: STEP: delete the pod Mar 8 16:07:23.697: INFO: Waiting for pod downward-api-b87a3632-8011-4621-b3cc-8ac7db24517c to disappear Mar 8 16:07:23.703: INFO: Pod downward-api-b87a3632-8011-4621-b3cc-8ac7db24517c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:07:23.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4388" for this suite. 
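The Downward API env-var test above maps pod metadata into environment variables via fieldRef. A minimal sketch (the names are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    # Print the three injected variables, then exit
    command: ["sh", "-c", "env | grep -E 'POD_NAME=|POD_NAMESPACE=|POD_IP='"]
    env:
    - name: POD_NAME
      valueFrom: {fieldRef: {fieldPath: metadata.name}}
    - name: POD_NAMESPACE
      valueFrom: {fieldRef: {fieldPath: metadata.namespace}}
    - name: POD_IP
      valueFrom: {fieldRef: {fieldPath: status.podIP}}
EOF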
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":280,"completed":225,"skipped":3584,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:07:23.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-1794 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 8 16:07:23.805: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 8 16:07:23.889: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 8 16:07:25.926: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 8 16:07:27.893: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 16:07:30.098: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 16:07:31.893: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 16:07:33.893: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 16:07:35.893: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 16:07:37.893: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 16:07:39.893: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 16:07:41.893: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 8 16:07:41.899: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 8 16:07:45.949: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.6 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1794 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 16:07:45.949: INFO: >>> kubeConfig: /root/.kube/config I0308 16:07:45.988007 7 log.go:172] (0xc002a693f0) (0xc0017bab40) Create stream I0308 16:07:45.988040 7 log.go:172] (0xc002a693f0) (0xc0017bab40) Stream added, broadcasting: 1 I0308 16:07:45.990585 7 log.go:172] (0xc002a693f0) Reply frame received for 1 I0308 16:07:45.990623 7 log.go:172] (0xc002a693f0) (0xc0027b2be0) Create stream I0308 16:07:45.990636 7 log.go:172] (0xc002a693f0) (0xc0027b2be0) Stream added, broadcasting: 3 I0308 16:07:45.991484 7 log.go:172] (0xc002a693f0) Reply frame received for 3 I0308 16:07:45.991519 7 log.go:172] (0xc002a693f0) (0xc001a12000) Create stream I0308 16:07:45.991534 7 log.go:172] (0xc002a693f0) (0xc001a12000) Stream added, broadcasting: 5 I0308 16:07:45.993218 7 log.go:172] (0xc002a693f0) Reply frame received for 5 I0308 16:07:47.055502 7 log.go:172] (0xc002a693f0) Data frame received for 3 I0308 16:07:47.055545 7 log.go:172] (0xc0027b2be0) (3) Data frame handling I0308 
16:07:47.055575 7 log.go:172] (0xc0027b2be0) (3) Data frame sent I0308 16:07:47.055595 7 log.go:172] (0xc002a693f0) Data frame received for 3 I0308 16:07:47.055607 7 log.go:172] (0xc0027b2be0) (3) Data frame handling I0308 16:07:47.055632 7 log.go:172] (0xc002a693f0) Data frame received for 5 I0308 16:07:47.055658 7 log.go:172] (0xc001a12000) (5) Data frame handling I0308 16:07:47.057937 7 log.go:172] (0xc002a693f0) Data frame received for 1 I0308 16:07:47.057967 7 log.go:172] (0xc0017bab40) (1) Data frame handling I0308 16:07:47.057984 7 log.go:172] (0xc0017bab40) (1) Data frame sent I0308 16:07:47.058013 7 log.go:172] (0xc002a693f0) (0xc0017bab40) Stream removed, broadcasting: 1 I0308 16:07:47.058062 7 log.go:172] (0xc002a693f0) Go away received I0308 16:07:47.058187 7 log.go:172] (0xc002a693f0) (0xc0017bab40) Stream removed, broadcasting: 1 I0308 16:07:47.058231 7 log.go:172] (0xc002a693f0) (0xc0027b2be0) Stream removed, broadcasting: 3 I0308 16:07:47.058246 7 log.go:172] (0xc002a693f0) (0xc001a12000) Stream removed, broadcasting: 5 Mar 8 16:07:47.058: INFO: Found all expected endpoints: [netserver-0] Mar 8 16:07:47.061: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.104 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1794 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 16:07:47.061: INFO: >>> kubeConfig: /root/.kube/config I0308 16:07:47.095242 7 log.go:172] (0xc002a69a20) (0xc0017bb040) Create stream I0308 16:07:47.095266 7 log.go:172] (0xc002a69a20) (0xc0017bb040) Stream added, broadcasting: 1 I0308 16:07:47.098302 7 log.go:172] (0xc002a69a20) Reply frame received for 1 I0308 16:07:47.098354 7 log.go:172] (0xc002a69a20) (0xc0027b2c80) Create stream I0308 16:07:47.098370 7 log.go:172] (0xc002a69a20) (0xc0027b2c80) Stream added, broadcasting: 3 I0308 16:07:47.099339 7 log.go:172] (0xc002a69a20) Reply frame received for 3 I0308 16:07:47.099372 7 log.go:172] (0xc002a69a20) (0xc0027b2d20) Create stream I0308 16:07:47.099406 7 log.go:172] (0xc002a69a20) (0xc0027b2d20) Stream added, broadcasting: 5 I0308 16:07:47.100426 7 log.go:172] (0xc002a69a20) Reply frame received for 5 I0308 16:07:48.154212 7 log.go:172] (0xc002a69a20) Data frame received for 3 I0308 16:07:48.154265 7 log.go:172] (0xc0027b2c80) (3) Data frame handling I0308 16:07:48.154288 7 log.go:172] (0xc0027b2c80) (3) Data frame sent I0308 16:07:48.154298 7 log.go:172] (0xc002a69a20) Data frame received for 3 I0308 16:07:48.154315 7 log.go:172] (0xc0027b2c80) (3) Data frame handling I0308 16:07:48.154372 7 log.go:172] (0xc002a69a20) Data frame received for 5 I0308 16:07:48.154424 7 log.go:172] (0xc0027b2d20) (5) Data frame handling I0308 16:07:48.156387 7 log.go:172] (0xc002a69a20) Data frame received for 1 I0308 16:07:48.156412 7 log.go:172] (0xc0017bb040) (1) Data frame handling I0308 16:07:48.156433 7 log.go:172] (0xc0017bb040) (1) Data frame sent I0308 16:07:48.156453 7 log.go:172] (0xc002a69a20) (0xc0017bb040) Stream removed, broadcasting: 1 I0308 16:07:48.156478 7 log.go:172] (0xc002a69a20) Go away received I0308 16:07:48.156546 7 log.go:172] (0xc002a69a20) (0xc0017bb040) Stream removed, broadcasting: 1 I0308 16:07:48.156574 7 log.go:172] (0xc002a69a20) (0xc0027b2c80) Stream removed, broadcasting: 3 I0308 16:07:48.156590 7 log.go:172] (0xc002a69a20) (0xc0027b2d20) Stream removed, broadcasting: 5 Mar 8 16:07:48.156: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:07:48.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1794" for this suite. • [SLOW TEST:24.455 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":226,"skipped":3605,"failed":0} [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:07:48.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-c0aaa2ad-8a23-4879-bdb9-c5888a13febd STEP: Creating a pod to test consume secrets Mar 8 16:07:48.305: INFO: Waiting up to 5m0s for pod "pod-secrets-9521f053-9375-49e1-ac97-85bd10e27f5d" in namespace "secrets-6402" to be "success or failure" Mar 8 16:07:48.315: INFO: Pod "pod-secrets-9521f053-9375-49e1-ac97-85bd10e27f5d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.339331ms Mar 8 16:07:50.324: INFO: Pod "pod-secrets-9521f053-9375-49e1-ac97-85bd10e27f5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018364644s STEP: Saw pod success Mar 8 16:07:50.324: INFO: Pod "pod-secrets-9521f053-9375-49e1-ac97-85bd10e27f5d" satisfied condition "success or failure" Mar 8 16:07:50.326: INFO: Trying to get logs from node latest-worker pod pod-secrets-9521f053-9375-49e1-ac97-85bd10e27f5d container secret-volume-test: STEP: delete the pod Mar 8 16:07:50.346: INFO: Waiting for pod pod-secrets-9521f053-9375-49e1-ac97-85bd10e27f5d to disappear Mar 8 16:07:50.350: INFO: Pod pod-secrets-9521f053-9375-49e1-ac97-85bd10e27f5d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:07:50.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6402" for this suite. 
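Unlike the earlier projected-secret case, the test above uses the plain secret volume type. A minimal sketch (the names and busybox image are illustrative):

kubectl create secret generic secret-volume-demo --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-volume-demo
EOF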
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":227,"skipped":3605,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:07:50.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 16:07:50.424: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 8 16:07:53.292: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6991 create -f -' Mar 8 16:07:55.789: INFO: stderr: "" Mar 8 16:07:55.789: INFO: stdout: "e2e-test-crd-publish-openapi-6126-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 8 16:07:55.789: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6991 delete e2e-test-crd-publish-openapi-6126-crds test-cr' Mar 8 16:07:55.903: INFO: stderr: "" Mar 8 16:07:55.903: INFO: stdout: "e2e-test-crd-publish-openapi-6126-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Mar 8 16:07:55.903: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6991 apply -f -' Mar 8 16:07:56.187: INFO: stderr: "" Mar 8 16:07:56.187: INFO: stdout: "e2e-test-crd-publish-openapi-6126-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 8 16:07:56.187: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6991 delete e2e-test-crd-publish-openapi-6126-crds test-cr' Mar 8 16:07:56.279: INFO: stderr: "" Mar 8 16:07:56.279: INFO: stdout: "e2e-test-crd-publish-openapi-6126-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Mar 8 16:07:56.279: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6126-crds' Mar 8 16:07:56.505: INFO: stderr: "" Mar 8 16:07:56.505: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6126-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:07:58.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6991" for this suite. 
• [SLOW TEST:7.917 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":280,"completed":228,"skipped":3617,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:07:58.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 16:07:58.354: INFO: Create a RollingUpdate DaemonSet Mar 8 16:07:58.357: INFO: Check that daemon pods launch on every node of the cluster Mar 8 16:07:58.374: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 16:07:58.379: INFO: Number of nodes with available pods: 0 Mar 8 16:07:58.379: INFO: Node latest-worker is running more than one daemon pod Mar 8 16:07:59.383: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 16:07:59.386: INFO: Number of nodes with available pods: 0 Mar 8 16:07:59.386: INFO: Node latest-worker is running more than one daemon pod Mar 8 16:08:00.383: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 16:08:00.385: INFO: Number of nodes with available pods: 1 Mar 8 16:08:00.385: INFO: Node latest-worker2 is running more than one daemon pod Mar 8 16:08:01.383: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 16:08:01.386: INFO: Number of nodes with available pods: 2 Mar 8 16:08:01.386: INFO: Number of running nodes: 2, number of available pods: 2 Mar 8 16:08:01.386: INFO: Update the DaemonSet to trigger a rollout Mar 8 16:08:01.393: INFO: Updating DaemonSet daemon-set Mar 8 16:08:13.414: INFO: Roll back the DaemonSet before rollout is complete Mar 8 16:08:13.420: INFO: Updating DaemonSet daemon-set Mar 8 16:08:13.420: INFO: Make sure DaemonSet rollback is complete Mar 8 16:08:13.440: INFO: Wrong image for pod: daemon-set-lsnfd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Mar 8 16:08:13.440: INFO: Pod daemon-set-lsnfd is not available Mar 8 16:08:13.448: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 16:08:14.452: INFO: Wrong image for pod: daemon-set-lsnfd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 8 16:08:14.453: INFO: Pod daemon-set-lsnfd is not available Mar 8 16:08:14.456: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 16:08:15.451: INFO: Pod daemon-set-b5sxj is not available Mar 8 16:08:15.453: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6882, will wait for the garbage collector to delete the pods Mar 8 16:08:15.514: INFO: Deleting DaemonSet.extensions daemon-set took: 4.853656ms Mar 8 16:08:16.114: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.187658ms Mar 8 16:08:22.218: INFO: Number of nodes with available pods: 0 Mar 8 16:08:22.218: INFO: Number of running nodes: 0, number of available pods: 0 Mar 8 16:08:22.221: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6882/daemonsets","resourceVersion":"28560"},"items":null} Mar 8 16:08:22.223: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6882/pods","resourceVersion":"28560"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:08:22.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6882" for this suite. 
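The rollback exercised above can be driven with kubectl's rollout commands. A minimal sketch using the images from the log (the container name app is illustrative; the DaemonSet name and namespace are the test's):

# Trigger a rolling update to an image that cannot be pulled
kubectl set image daemonset/daemon-set app=foo:non-existent -n daemonsets-6882
# Undo it before the broken rollout completes; already-healthy pods are not restarted
kubectl rollout undo daemonset/daemon-set -n daemonsets-6882
kubectl rollout status daemonset/daemon-set -n daemonsets-6882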
• [SLOW TEST:23.972 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":280,"completed":229,"skipped":3627,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:08:22.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating the pod Mar 8 16:08:24.850: INFO: Successfully updated pod "annotationupdate3ef6ec04-2559-4427-92cf-4d22d09857e7" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:08:26.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3125" for this suite. 
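The annotation-update test above works because a downward API projection refreshes the mounted file when pod metadata changes, with no container restart. A minimal sketch (the names and annotation values are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    builder: alice
spec:
  containers:
  - name: client-container
    image: busybox
    # Keep printing the projected annotations file
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
# Update the annotation and watch the mounted file change shortly afterwards
kubectl annotate pod annotationupdate-demo builder=bob --overwrite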
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":280,"completed":230,"skipped":3659,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:08:26.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 16:08:26.934: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 8 16:08:26.943: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 8 16:08:31.946: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 8 16:08:31.946: INFO: Creating deployment "test-rolling-update-deployment" Mar 8 16:08:31.950: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 8 16:08:31.959: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 8 16:08:33.966: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 8 16:08:33.969: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 8 16:08:33.977: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-7309 /apis/apps/v1/namespaces/deployment-7309/deployments/test-rolling-update-deployment b21212a0-4553-474d-9f1a-8029a59dceac 28692 1 2020-03-08 16:08:31 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005532398 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-08 16:08:32 +0000 UTC,LastTransitionTime:2020-03-08 16:08:32 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-03-08 16:08:33 +0000 UTC,LastTransitionTime:2020-03-08 16:08:31 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 8 16:08:33.981: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-7309 /apis/apps/v1/namespaces/deployment-7309/replicasets/test-rolling-update-deployment-67cf4f6444 190c2a3d-4423-472b-8d54-97f32ead416a 28681 1 2020-03-08 16:08:31 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment b21212a0-4553-474d-9f1a-8029a59dceac 0xc0055328d7 0xc0055328d8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005532968 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 8 16:08:33.981: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 8 16:08:33.981: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-7309 /apis/apps/v1/namespaces/deployment-7309/replicasets/test-rolling-update-controller 87bc95c4-45ee-46c3-9b5b-c1f4894ec298 28690 2 2020-03-08 16:08:26 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment b21212a0-4553-474d-9f1a-8029a59dceac 0xc0055327f7 0xc0055327f8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{
0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005532858 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 8 16:08:33.984: INFO: Pod "test-rolling-update-deployment-67cf4f6444-c9xkb" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-c9xkb test-rolling-update-deployment-67cf4f6444- deployment-7309 /api/v1/namespaces/deployment-7309/pods/test-rolling-update-deployment-67cf4f6444-c9xkb 9eb5496e-78de-4213-b4b8-864edeb35adf 28680 0 2020-03-08 16:08:31 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 190c2a3d-4423-472b-8d54-97f32ead416a 0xc005532e17 0xc005532e18}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cdh87,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cdh87,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cdh87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreach
able,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:08:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:08:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:08:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:08:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.2.106,StartTime:2020-03-08 16:08:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 16:08:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://53fef6f5e4c9c61702467709f35f5e1b13b8e74219d6a45a5e93c6188491f2b8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.106,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:08:33.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7309" for this suite. 
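------------------------------
The Deployment dumped above carries the default rolling-update parameters (MaxUnavailable and MaxSurge both 25%). For readers reproducing this outside the suite, a minimal manifest of the same shape; the names and image are taken from the dump, everything else is a sketch rather than the suite's generator:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # matches MaxUnavailable:25% in the dump above
      maxSurge: 25%         # matches MaxSurge:25% in the dump above
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # image from the new ReplicaSet's template above
------------------------------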
• [SLOW TEST:7.119 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":280,"completed":231,"skipped":3699,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:08:33.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test hostPath mode Mar 8 16:08:34.070: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3966" to be "success or failure" Mar 8 16:08:34.083: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.23458ms Mar 8 16:08:36.086: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015511824s Mar 8 16:08:38.089: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018458577s STEP: Saw pod success Mar 8 16:08:38.089: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Mar 8 16:08:38.091: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 8 16:08:38.127: INFO: Waiting for pod pod-host-path-test to disappear Mar 8 16:08:38.131: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:08:38.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-3966" for this suite. 
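------------------------------
The HostPath spec above creates pod-host-path-test, mounts a hostPath volume, and asserts on the mode reported inside the container. A sketch of such a pod; the pod and container names come from the log, while the image, command, and host path are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-test
spec:
  restartPolicy: Never        # the pod runs to completion, hence the "success or failure" wait above
  containers:
  - name: test-container-1
    image: docker.io/library/busybox:1.29               # assumption: any image with a shell works here
    command: ["sh", "-c", "stat -c %a /test-volume"]    # print the mode the test asserts on
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/host-path-test   # illustrative host directory
------------------------------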
•{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":232,"skipped":3727,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:08:38.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating service multi-endpoint-test in namespace services-191 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-191 to expose endpoints map[] Mar 8 16:08:38.231: INFO: successfully validated that service multi-endpoint-test in namespace services-191 exposes endpoints map[] (10.287274ms elapsed) STEP: Creating pod pod1 in namespace services-191 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-191 to expose endpoints map[pod1:[100]] Mar 8 16:08:40.421: INFO: successfully validated that service multi-endpoint-test in namespace services-191 exposes endpoints map[pod1:[100]] (2.183480959s elapsed) STEP: Creating pod pod2 in namespace services-191 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-191 to expose endpoints map[pod1:[100] pod2:[101]] Mar 8 16:08:42.512: INFO: successfully validated that service multi-endpoint-test in namespace services-191 exposes endpoints map[pod1:[100] pod2:[101]] (2.086235733s elapsed) STEP: Deleting pod pod1 in namespace services-191 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-191 to expose endpoints map[pod2:[101]] Mar 8 16:08:43.570: INFO: successfully validated that service multi-endpoint-test in namespace services-191 exposes endpoints map[pod2:[101]] (1.035357894s elapsed) STEP: Deleting pod pod2 in namespace services-191 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-191 to expose endpoints map[] Mar 8 16:08:44.607: INFO: successfully validated that service multi-endpoint-test in namespace services-191 exposes endpoints map[] (1.032894999s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:08:44.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-191" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:6.498 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":280,"completed":233,"skipped":3755,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:08:44.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:08:46.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1982" for this suite. 
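------------------------------
The Kubelet spec above schedules a busybox container with a read-only root filesystem and verifies that nothing gets written to it. A sketch of such a pod, with every name, the image, and the command as assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo hello > /file; sleep 30"]   # the write should fail with a read-only filesystem error
    securityContext:
      readOnlyRootFilesystem: true   # the property under test
------------------------------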
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":234,"skipped":3767,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:08:47.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 16:08:47.997: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 16:08:50.007: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719280528, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719280528, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719280528, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719280527, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 16:08:53.056: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 16:08:53.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1714-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:08:54.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7616" for this suite. STEP: Destroying namespace "webhook-7616-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:7.240 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":280,"completed":235,"skipped":3800,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:08:54.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name cm-test-opt-del-68424e41-7054-4f40-8b5a-26638fc386b0 STEP: Creating configMap with name cm-test-opt-upd-6386f224-a2b0-4275-9e37-2644b10c52b2 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-68424e41-7054-4f40-8b5a-26638fc386b0 STEP: Updating configmap cm-test-opt-upd-6386f224-a2b0-4275-9e37-2644b10c52b2 STEP: Creating configMap with name cm-test-opt-create-0c7090c5-8533-4a48-b203-cf0af704f7b6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:10:28.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7353" for this suite. 
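------------------------------
The Projected configMap spec above mounts optional configMap sources, then deletes one, updates another, and creates a third, waiting each time for the volume contents to follow. The mechanism is a projected volume whose configMap source is marked optional; a sketch using one of the configMap names from the log, with the image, command, and key as assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  containers:
  - name: createcm-volume-test
    image: docker.io/library/busybox:1.29   # assumption
    command: ["sh", "-c", "while true; do cat /etc/cm-volume/data-1; sleep 1; done"]
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/cm-volume
  volumes:
  - name: cm-volume
    projected:
      sources:
      - configMap:
          name: cm-test-opt-create-0c7090c5-8533-4a48-b203-cf0af704f7b6   # created mid-test in the log above
          optional: true   # the pod mounts and starts even while this configMap is still absent
------------------------------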
• [SLOW TEST:94.635 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":236,"skipped":3811,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:10:28.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 8 16:10:33.002: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 8 16:10:33.006: INFO: Pod pod-with-prestop-http-hook still exists Mar 8 16:10:35.007: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 8 16:10:35.029: INFO: Pod pod-with-prestop-http-hook still exists Mar 8 16:10:37.007: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 8 16:10:37.010: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:10:37.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6191" for this suite. 
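------------------------------
The lifecycle-hook spec above first starts a handler pod ("create the container to handle the HTTPGet hook request"), then creates pod-with-prestop-http-hook; deleting that pod must fire an HTTP GET at the handler before the container stops, which is what "check prestop hook" verifies. A sketch of the hooked pod; the image, port, path, and handler IP are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: docker.io/library/busybox:1.29   # assumption
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        httpGet:
          host: 10.244.2.100          # assumption: pod IP of the hook-handler pod created above
          port: 8080                  # assumption: the handler's listening port
          path: /echo?msg=prestop     # the handler records this request; the test then inspects it
------------------------------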
• [SLOW TEST:8.196 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":280,"completed":237,"skipped":3823,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:10:37.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5688.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5688.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5688.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5688.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 16:10:41.233: INFO: DNS probes using dns-test-87c41f69-3872-468e-90ec-491b8a6e198e succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5688.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5688.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5688.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5688.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 16:10:45.309: INFO: File wheezy_udp@dns-test-service-3.dns-5688.svc.cluster.local from pod dns-5688/dns-test-1a08c47e-a519-49f7-b3de-05943e03e674 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 16:10:45.312: INFO: File jessie_udp@dns-test-service-3.dns-5688.svc.cluster.local from pod dns-5688/dns-test-1a08c47e-a519-49f7-b3de-05943e03e674 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 8 16:10:45.312: INFO: Lookups using dns-5688/dns-test-1a08c47e-a519-49f7-b3de-05943e03e674 failed for: [wheezy_udp@dns-test-service-3.dns-5688.svc.cluster.local jessie_udp@dns-test-service-3.dns-5688.svc.cluster.local] Mar 8 16:10:50.315: INFO: File wheezy_udp@dns-test-service-3.dns-5688.svc.cluster.local from pod dns-5688/dns-test-1a08c47e-a519-49f7-b3de-05943e03e674 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 16:10:50.317: INFO: File jessie_udp@dns-test-service-3.dns-5688.svc.cluster.local from pod dns-5688/dns-test-1a08c47e-a519-49f7-b3de-05943e03e674 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 16:10:50.317: INFO: Lookups using dns-5688/dns-test-1a08c47e-a519-49f7-b3de-05943e03e674 failed for: [wheezy_udp@dns-test-service-3.dns-5688.svc.cluster.local jessie_udp@dns-test-service-3.dns-5688.svc.cluster.local] Mar 8 16:10:55.316: INFO: File wheezy_udp@dns-test-service-3.dns-5688.svc.cluster.local from pod dns-5688/dns-test-1a08c47e-a519-49f7-b3de-05943e03e674 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 16:10:55.319: INFO: File jessie_udp@dns-test-service-3.dns-5688.svc.cluster.local from pod dns-5688/dns-test-1a08c47e-a519-49f7-b3de-05943e03e674 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 16:10:55.319: INFO: Lookups using dns-5688/dns-test-1a08c47e-a519-49f7-b3de-05943e03e674 failed for: [wheezy_udp@dns-test-service-3.dns-5688.svc.cluster.local jessie_udp@dns-test-service-3.dns-5688.svc.cluster.local] Mar 8 16:11:00.315: INFO: File wheezy_udp@dns-test-service-3.dns-5688.svc.cluster.local from pod dns-5688/dns-test-1a08c47e-a519-49f7-b3de-05943e03e674 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 16:11:00.317: INFO: File jessie_udp@dns-test-service-3.dns-5688.svc.cluster.local from pod dns-5688/dns-test-1a08c47e-a519-49f7-b3de-05943e03e674 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 16:11:00.317: INFO: Lookups using dns-5688/dns-test-1a08c47e-a519-49f7-b3de-05943e03e674 failed for: [wheezy_udp@dns-test-service-3.dns-5688.svc.cluster.local jessie_udp@dns-test-service-3.dns-5688.svc.cluster.local] Mar 8 16:11:05.316: INFO: File wheezy_udp@dns-test-service-3.dns-5688.svc.cluster.local from pod dns-5688/dns-test-1a08c47e-a519-49f7-b3de-05943e03e674 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 16:11:05.319: INFO: File jessie_udp@dns-test-service-3.dns-5688.svc.cluster.local from pod dns-5688/dns-test-1a08c47e-a519-49f7-b3de-05943e03e674 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 16:11:05.319: INFO: Lookups using dns-5688/dns-test-1a08c47e-a519-49f7-b3de-05943e03e674 failed for: [wheezy_udp@dns-test-service-3.dns-5688.svc.cluster.local jessie_udp@dns-test-service-3.dns-5688.svc.cluster.local] Mar 8 16:11:10.316: INFO: File wheezy_udp@dns-test-service-3.dns-5688.svc.cluster.local from pod dns-5688/dns-test-1a08c47e-a519-49f7-b3de-05943e03e674 contains '' instead of 'bar.example.com.' 
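------------------------------
The retry loop above is expected behavior, not a failure: after the test patches the service's externalName from foo.example.com to bar.example.com, the dig probes keep seeing the old CNAME until the change propagates through cluster DNS, and the framework polls until they flip. The object under test is an ExternalName Service along these lines (name and namespace taken from the dig targets above):

apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-5688
spec:
  type: ExternalName
  externalName: foo.example.com   # patched to bar.example.com mid-test, then converted to type: ClusterIP
------------------------------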
Mar 8 16:11:10.318: INFO: Lookups using dns-5688/dns-test-1a08c47e-a519-49f7-b3de-05943e03e674 failed for: [wheezy_udp@dns-test-service-3.dns-5688.svc.cluster.local] Mar 8 16:11:15.318: INFO: DNS probes using dns-test-1a08c47e-a519-49f7-b3de-05943e03e674 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5688.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5688.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5688.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5688.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 16:11:19.424: INFO: DNS probes using dns-test-56775319-afb5-4747-b082-5d787d0ebfd7 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:11:19.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5688" for this suite. • [SLOW TEST:42.437 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":280,"completed":238,"skipped":3843,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:11:19.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:11:35.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3275" for this suite. • [SLOW TEST:16.173 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":280,"completed":239,"skipped":3909,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:11:35.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 8 16:11:35.879: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5185 /api/v1/namespaces/watch-5185/configmaps/e2e-watch-test-resource-version 29570b35-26dc-4526-9d69-3e2a9ca24801 29696 0 2020-03-08 16:11:35 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 16:11:35.879: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5185 /api/v1/namespaces/watch-5185/configmaps/e2e-watch-test-resource-version 29570b35-26dc-4526-9d69-3e2a9ca24801 29697 0 2020-03-08 16:11:35 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:11:35.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5185" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":280,"completed":240,"skipped":3910,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:11:35.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating secret secrets-6142/secret-test-176fd283-446e-4795-be5f-864c44b6f7d7 STEP: Creating a pod to test consume secrets Mar 8 16:11:36.089: INFO: Waiting up to 5m0s for pod "pod-configmaps-54ac63a5-2fad-47fd-a030-a847339d0a53" in namespace "secrets-6142" to be "success or failure" Mar 8 16:11:36.097: INFO: Pod "pod-configmaps-54ac63a5-2fad-47fd-a030-a847339d0a53": Phase="Pending", Reason="", readiness=false. Elapsed: 8.002281ms Mar 8 16:11:38.387: INFO: Pod "pod-configmaps-54ac63a5-2fad-47fd-a030-a847339d0a53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.297313132s STEP: Saw pod success Mar 8 16:11:38.387: INFO: Pod "pod-configmaps-54ac63a5-2fad-47fd-a030-a847339d0a53" satisfied condition "success or failure" Mar 8 16:11:38.389: INFO: Trying to get logs from node latest-worker pod pod-configmaps-54ac63a5-2fad-47fd-a030-a847339d0a53 container env-test: STEP: delete the pod Mar 8 16:11:38.424: INFO: Waiting for pod pod-configmaps-54ac63a5-2fad-47fd-a030-a847339d0a53 to disappear Mar 8 16:11:38.433: INFO: Pod pod-configmaps-54ac63a5-2fad-47fd-a030-a847339d0a53 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:11:38.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6142" for this suite. 
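------------------------------
The Secrets spec above injects secret secrets-6142/secret-test-176fd283-446e-4795-be5f-864c44b6f7d7 into a container named env-test through an environment variable and reads the value back from the pod log. A sketch; the secret name and container name are from the log, while the image, variable name, and key are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env"]   # the test reads this output back via logs
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test-176fd283-446e-4795-be5f-864c44b6f7d7
          key: data-1   # assumption: key name inside the secret
------------------------------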
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":280,"completed":241,"skipped":3918,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:11:38.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with configMap that has name projected-configmap-test-upd-5f73a5ee-756e-4f36-a805-3cd4b7bb71fa STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-5f73a5ee-756e-4f36-a805-3cd4b7bb71fa STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:13:15.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7415" for this suite. • [SLOW TEST:96.590 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":242,"skipped":3923,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:13:15.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 16:13:15.205: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"8d1c8584-fcb4-429b-8861-25076bd382dd", Controller:(*bool)(0xc004e4fafa), BlockOwnerDeletion:(*bool)(0xc004e4fafb)}} Mar 8 16:13:15.250: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"346832ea-4836-411f-ba11-d9da81ee9ef5", Controller:(*bool)(0xc00328892e), BlockOwnerDeletion:(*bool)(0xc00328892f)}} Mar 8 16:13:15.268: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"6d24ad1e-32df-4ece-932b-abefd958e762", 
Controller:(*bool)(0xc004e4fca6), BlockOwnerDeletion:(*bool)(0xc004e4fca7)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:13:20.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3760" for this suite. • [SLOW TEST:5.282 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":280,"completed":243,"skipped":3931,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:13:20.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating Agnhost RC Mar 8 16:13:20.381: INFO: namespace kubectl-8216 Mar 8 16:13:20.381: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8216' Mar 8 16:13:20.680: INFO: stderr: "" Mar 8 16:13:20.680: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 8 16:13:21.685: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 16:13:21.685: INFO: Found 0 / 1 Mar 8 16:13:22.683: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 16:13:22.683: INFO: Found 1 / 1 Mar 8 16:13:22.683: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 8 16:13:22.686: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 16:13:22.686: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 8 16:13:22.686: INFO: wait on agnhost-master startup in kubectl-8216 Mar 8 16:13:22.686: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs agnhost-master-mv5cz agnhost-master --namespace=kubectl-8216' Mar 8 16:13:22.780: INFO: stderr: "" Mar 8 16:13:22.780: INFO: stdout: "Paused\n" STEP: exposing RC Mar 8 16:13:22.780: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8216' Mar 8 16:13:22.885: INFO: stderr: "" Mar 8 16:13:22.885: INFO: stdout: "service/rm2 exposed\n" Mar 8 16:13:22.890: INFO: Service rm2 in namespace kubectl-8216 found. 
STEP: exposing service Mar 8 16:13:24.897: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8216' Mar 8 16:13:25.055: INFO: stderr: "" Mar 8 16:13:25.055: INFO: stdout: "service/rm3 exposed\n" Mar 8 16:13:25.083: INFO: Service rm3 in namespace kubectl-8216 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:13:27.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8216" for this suite. • [SLOW TEST:6.782 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1297 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":280,"completed":244,"skipped":3947,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:13:27.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-5b3c0fc0-86f9-403c-871a-8a87c99c1f26 STEP: Creating a pod to test consume configMaps Mar 8 16:13:27.216: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e9f1f545-545a-45bc-bac6-2ce82fd2c0cc" in namespace "projected-5700" to be "success or failure" Mar 8 16:13:27.220: INFO: Pod "pod-projected-configmaps-e9f1f545-545a-45bc-bac6-2ce82fd2c0cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093577ms Mar 8 16:13:29.223: INFO: Pod "pod-projected-configmaps-e9f1f545-545a-45bc-bac6-2ce82fd2c0cc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006875513s STEP: Saw pod success Mar 8 16:13:29.223: INFO: Pod "pod-projected-configmaps-e9f1f545-545a-45bc-bac6-2ce82fd2c0cc" satisfied condition "success or failure" Mar 8 16:13:29.224: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-e9f1f545-545a-45bc-bac6-2ce82fd2c0cc container projected-configmap-volume-test: STEP: delete the pod Mar 8 16:13:29.260: INFO: Waiting for pod pod-projected-configmaps-e9f1f545-545a-45bc-bac6-2ce82fd2c0cc to disappear Mar 8 16:13:29.268: INFO: Pod pod-projected-configmaps-e9f1f545-545a-45bc-bac6-2ce82fd2c0cc no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:13:29.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5700" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":280,"completed":245,"skipped":3986,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:13:29.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 8 16:13:29.341: INFO: Waiting up to 5m0s for pod "pod-f2e3e4ae-8005-4191-a014-16672c67560f" in namespace "emptydir-3672" to be "success or failure" Mar 8 16:13:29.346: INFO: Pod "pod-f2e3e4ae-8005-4191-a014-16672c67560f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.91453ms Mar 8 16:13:31.350: INFO: Pod "pod-f2e3e4ae-8005-4191-a014-16672c67560f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008511945s STEP: Saw pod success Mar 8 16:13:31.350: INFO: Pod "pod-f2e3e4ae-8005-4191-a014-16672c67560f" satisfied condition "success or failure" Mar 8 16:13:31.352: INFO: Trying to get logs from node latest-worker pod pod-f2e3e4ae-8005-4191-a014-16672c67560f container test-container: STEP: delete the pod Mar 8 16:13:31.371: INFO: Waiting for pod pod-f2e3e4ae-8005-4191-a014-16672c67560f to disappear Mar 8 16:13:31.392: INFO: Pod pod-f2e3e4ae-8005-4191-a014-16672c67560f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:13:31.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3672" for this suite. 
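------------------------------
The EmptyDir spec above, "(root,0644,tmpfs)", mounts a memory-backed emptyDir as root and checks that a file created with mode 0644 reports that mode and that the mount is tmpfs. A shell-based sketch of the same checks; the suite uses its own test image and flags, so busybox and the command here are substitutions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "mount | grep /test-volume && touch /test-volume/f && chmod 0644 /test-volume/f && stat -c %a /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs backing, the "(tmpfs)" part of the test name
------------------------------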
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":246,"skipped":4004,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:13:31.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 8 16:13:31.497: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d28e3718-f926-40ce-89f9-fc7f0e92859c" in namespace "projected-9375" to be "success or failure" Mar 8 16:13:31.502: INFO: Pod "downwardapi-volume-d28e3718-f926-40ce-89f9-fc7f0e92859c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.825164ms Mar 8 16:13:33.505: INFO: Pod "downwardapi-volume-d28e3718-f926-40ce-89f9-fc7f0e92859c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007979508s STEP: Saw pod success Mar 8 16:13:33.505: INFO: Pod "downwardapi-volume-d28e3718-f926-40ce-89f9-fc7f0e92859c" satisfied condition "success or failure" Mar 8 16:13:33.508: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-d28e3718-f926-40ce-89f9-fc7f0e92859c container client-container: STEP: delete the pod Mar 8 16:13:33.561: INFO: Waiting for pod downwardapi-volume-d28e3718-f926-40ce-89f9-fc7f0e92859c to disappear Mar 8 16:13:33.566: INFO: Pod downwardapi-volume-d28e3718-f926-40ce-89f9-fc7f0e92859c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:13:33.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9375" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":280,"completed":247,"skipped":4012,"failed":0} SSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:13:33.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-wdwj8 in namespace proxy-445 I0308 16:13:33.645935 7 runners.go:189] Created replication controller with name: proxy-service-wdwj8, namespace: proxy-445, replica count: 1 I0308 16:13:34.696520 7 runners.go:189] proxy-service-wdwj8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0308 16:13:35.696746 7 runners.go:189] proxy-service-wdwj8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0308 16:13:36.696983 7 runners.go:189] proxy-service-wdwj8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0308 16:13:37.697242 7 runners.go:189] proxy-service-wdwj8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0308 16:13:38.697412 7 runners.go:189] proxy-service-wdwj8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0308 16:13:39.697622 7 runners.go:189] proxy-service-wdwj8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0308 16:13:40.697787 7 runners.go:189] proxy-service-wdwj8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0308 16:13:41.698017 7 runners.go:189] proxy-service-wdwj8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0308 16:13:42.698277 7 runners.go:189] proxy-service-wdwj8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0308 16:13:43.698478 7 runners.go:189] proxy-service-wdwj8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0308 16:13:44.698638 7 runners.go:189] proxy-service-wdwj8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0308 16:13:45.698830 7 runners.go:189] proxy-service-wdwj8 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 8 16:13:45.701: INFO: setup took 12.091870864s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 8 16:13:45.707: INFO: (0) 
/api/v1/namespaces/proxy-445/pods/http:proxy-service-wdwj8-7zsxs:1080/proxy/: t... (200; 5.960967ms) Mar 8 16:13:45.716: INFO: (0) /api/v1/namespaces/proxy-445/services/proxy-service-wdwj8:portname2/proxy/: bar (200; 14.866131ms) Mar 8 16:13:45.716: INFO: (0) /api/v1/namespaces/proxy-445/pods/http:proxy-service-wdwj8-7zsxs:160/proxy/: foo (200; 15.051829ms) Mar 8 16:13:45.714: INFO: (0) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs/proxy/: test (200; 12.910596ms) Mar 8 16:13:45.716: INFO: (0) /api/v1/namespaces/proxy-445/pods/http:proxy-service-wdwj8-7zsxs:162/proxy/: bar (200; 15.24621ms) Mar 8 16:13:45.716: INFO: (0) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs:1080/proxy/: testt... (200; 5.34146ms) Mar 8 16:13:45.725: INFO: (1) /api/v1/namespaces/proxy-445/services/https:proxy-service-wdwj8:tlsportname2/proxy/: tls qux (200; 5.459836ms) Mar 8 16:13:45.725: INFO: (1) /api/v1/namespaces/proxy-445/services/http:proxy-service-wdwj8:portname2/proxy/: bar (200; 5.574733ms) Mar 8 16:13:45.725: INFO: (1) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:443/proxy/: testtest (200; 5.858888ms) Mar 8 16:13:45.726: INFO: (1) /api/v1/namespaces/proxy-445/services/proxy-service-wdwj8:portname2/proxy/: bar (200; 6.769435ms) Mar 8 16:13:45.726: INFO: (1) /api/v1/namespaces/proxy-445/services/proxy-service-wdwj8:portname1/proxy/: foo (200; 6.877369ms) Mar 8 16:13:45.726: INFO: (1) /api/v1/namespaces/proxy-445/services/https:proxy-service-wdwj8:tlsportname1/proxy/: tls baz (200; 6.750642ms) Mar 8 16:13:45.726: INFO: (1) /api/v1/namespaces/proxy-445/services/http:proxy-service-wdwj8:portname1/proxy/: foo (200; 6.830859ms) Mar 8 16:13:45.728: INFO: (2) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs/proxy/: test (200; 1.848672ms) Mar 8 16:13:45.730: INFO: (2) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs:160/proxy/: foo (200; 3.493968ms) Mar 8 16:13:45.730: INFO: (2) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:443/proxy/: testt... 
(200; 4.156655ms) Mar 8 16:13:45.731: INFO: (2) /api/v1/namespaces/proxy-445/services/http:proxy-service-wdwj8:portname1/proxy/: foo (200; 4.171094ms) Mar 8 16:13:45.731: INFO: (2) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:462/proxy/: tls qux (200; 4.273929ms) Mar 8 16:13:45.731: INFO: (2) /api/v1/namespaces/proxy-445/pods/http:proxy-service-wdwj8-7zsxs:162/proxy/: bar (200; 4.36918ms) Mar 8 16:13:45.731: INFO: (2) /api/v1/namespaces/proxy-445/services/proxy-service-wdwj8:portname1/proxy/: foo (200; 5.029678ms) Mar 8 16:13:45.732: INFO: (2) /api/v1/namespaces/proxy-445/services/proxy-service-wdwj8:portname2/proxy/: bar (200; 5.124979ms) Mar 8 16:13:45.732: INFO: (2) /api/v1/namespaces/proxy-445/services/https:proxy-service-wdwj8:tlsportname1/proxy/: tls baz (200; 5.128568ms) Mar 8 16:13:45.732: INFO: (2) /api/v1/namespaces/proxy-445/services/https:proxy-service-wdwj8:tlsportname2/proxy/: tls qux (200; 5.123262ms) Mar 8 16:13:45.732: INFO: (2) /api/v1/namespaces/proxy-445/services/http:proxy-service-wdwj8:portname2/proxy/: bar (200; 5.218856ms) Mar 8 16:13:45.733: INFO: (3) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs/proxy/: test (200; 1.561486ms) Mar 8 16:13:45.735: INFO: (3) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs:162/proxy/: bar (200; 2.952742ms) Mar 8 16:13:45.735: INFO: (3) /api/v1/namespaces/proxy-445/pods/http:proxy-service-wdwj8-7zsxs:162/proxy/: bar (200; 3.277844ms) Mar 8 16:13:45.735: INFO: (3) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs:1080/proxy/: testt... (200; 3.680564ms) Mar 8 16:13:45.735: INFO: (3) /api/v1/namespaces/proxy-445/pods/http:proxy-service-wdwj8-7zsxs:160/proxy/: foo (200; 3.749494ms) Mar 8 16:13:45.736: INFO: (3) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:460/proxy/: tls baz (200; 3.751167ms) Mar 8 16:13:45.736: INFO: (3) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:443/proxy/: testt... (200; 3.622701ms) Mar 8 16:13:45.740: INFO: (4) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:443/proxy/: test (200; 3.657463ms) Mar 8 16:13:45.741: INFO: (4) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:460/proxy/: tls baz (200; 3.659309ms) Mar 8 16:13:45.741: INFO: (4) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:462/proxy/: tls qux (200; 3.742984ms) Mar 8 16:13:45.742: INFO: (4) /api/v1/namespaces/proxy-445/services/proxy-service-wdwj8:portname2/proxy/: bar (200; 5.111496ms) Mar 8 16:13:45.742: INFO: (4) /api/v1/namespaces/proxy-445/services/https:proxy-service-wdwj8:tlsportname2/proxy/: tls qux (200; 5.11431ms) Mar 8 16:13:45.742: INFO: (4) /api/v1/namespaces/proxy-445/services/https:proxy-service-wdwj8:tlsportname1/proxy/: tls baz (200; 5.160284ms) Mar 8 16:13:45.742: INFO: (4) /api/v1/namespaces/proxy-445/services/http:proxy-service-wdwj8:portname1/proxy/: foo (200; 5.156168ms) Mar 8 16:13:45.742: INFO: (4) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs:162/proxy/: bar (200; 5.259711ms) Mar 8 16:13:45.742: INFO: (4) /api/v1/namespaces/proxy-445/services/http:proxy-service-wdwj8:portname2/proxy/: bar (200; 5.409536ms) Mar 8 16:13:45.742: INFO: (4) /api/v1/namespaces/proxy-445/services/proxy-service-wdwj8:portname1/proxy/: foo (200; 5.461172ms) Mar 8 16:13:45.744: INFO: (5) /api/v1/namespaces/proxy-445/pods/http:proxy-service-wdwj8-7zsxs:1080/proxy/: t... 
(200; 2.173982ms) Mar 8 16:13:45.745: INFO: (5) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs:160/proxy/: foo (200; 2.252447ms) Mar 8 16:13:45.745: INFO: (5) /api/v1/namespaces/proxy-445/pods/http:proxy-service-wdwj8-7zsxs:162/proxy/: bar (200; 2.308301ms) Mar 8 16:13:45.745: INFO: (5) /api/v1/namespaces/proxy-445/pods/http:proxy-service-wdwj8-7zsxs:160/proxy/: foo (200; 3.025927ms) Mar 8 16:13:45.745: INFO: (5) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:462/proxy/: tls qux (200; 3.0099ms) Mar 8 16:13:45.745: INFO: (5) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:460/proxy/: tls baz (200; 3.073544ms) Mar 8 16:13:45.746: INFO: (5) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs/proxy/: test (200; 3.988023ms) Mar 8 16:13:45.747: INFO: (5) /api/v1/namespaces/proxy-445/services/proxy-service-wdwj8:portname1/proxy/: foo (200; 4.307451ms) Mar 8 16:13:45.747: INFO: (5) /api/v1/namespaces/proxy-445/services/https:proxy-service-wdwj8:tlsportname1/proxy/: tls baz (200; 4.373997ms) Mar 8 16:13:45.747: INFO: (5) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs:162/proxy/: bar (200; 4.314552ms) Mar 8 16:13:45.747: INFO: (5) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:443/proxy/: testtestt... (200; 5.420566ms) Mar 8 16:13:45.755: INFO: (6) /api/v1/namespaces/proxy-445/services/http:proxy-service-wdwj8:portname2/proxy/: bar (200; 7.573661ms) Mar 8 16:13:45.755: INFO: (6) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs/proxy/: test (200; 7.73257ms) Mar 8 16:13:45.755: INFO: (6) /api/v1/namespaces/proxy-445/services/proxy-service-wdwj8:portname2/proxy/: bar (200; 7.76432ms) Mar 8 16:13:45.755: INFO: (6) /api/v1/namespaces/proxy-445/services/proxy-service-wdwj8:portname1/proxy/: foo (200; 7.870556ms) Mar 8 16:13:45.755: INFO: (6) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:462/proxy/: tls qux (200; 7.805715ms) Mar 8 16:13:45.755: INFO: (6) /api/v1/namespaces/proxy-445/services/http:proxy-service-wdwj8:portname1/proxy/: foo (200; 7.919568ms) Mar 8 16:13:45.755: INFO: (6) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:443/proxy/: testt... 
(200; 2.930038ms) Mar 8 16:13:45.758: INFO: (7) /api/v1/namespaces/proxy-445/pods/http:proxy-service-wdwj8-7zsxs:160/proxy/: foo (200; 2.958601ms) Mar 8 16:13:45.758: INFO: (7) /api/v1/namespaces/proxy-445/services/http:proxy-service-wdwj8:portname1/proxy/: foo (200; 3.035195ms) Mar 8 16:13:45.758: INFO: (7) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs:160/proxy/: foo (200; 3.234948ms) Mar 8 16:13:45.759: INFO: (7) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs/proxy/: test (200; 4.138332ms) Mar 8 16:13:45.759: INFO: (7) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:460/proxy/: tls baz (200; 4.23443ms) Mar 8 16:13:45.759: INFO: (7) /api/v1/namespaces/proxy-445/services/proxy-service-wdwj8:portname2/proxy/: bar (200; 4.249407ms) Mar 8 16:13:45.759: INFO: (7) /api/v1/namespaces/proxy-445/services/proxy-service-wdwj8:portname1/proxy/: foo (200; 4.296219ms) Mar 8 16:13:45.759: INFO: (7) /api/v1/namespaces/proxy-445/services/https:proxy-service-wdwj8:tlsportname1/proxy/: tls baz (200; 4.373955ms) Mar 8 16:13:45.759: INFO: (7) /api/v1/namespaces/proxy-445/services/http:proxy-service-wdwj8:portname2/proxy/: bar (200; 4.430631ms) Mar 8 16:13:45.759: INFO: (7) /api/v1/namespaces/proxy-445/services/https:proxy-service-wdwj8:tlsportname2/proxy/: tls qux (200; 4.455239ms) Mar 8 16:13:45.762: INFO: (8) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs:160/proxy/: foo (200; 2.517586ms) Mar 8 16:13:45.762: INFO: (8) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:460/proxy/: tls baz (200; 2.457074ms) Mar 8 16:13:45.762: INFO: (8) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:462/proxy/: tls qux (200; 2.54135ms) Mar 8 16:13:45.762: INFO: (8) /api/v1/namespaces/proxy-445/pods/http:proxy-service-wdwj8-7zsxs:1080/proxy/: t... (200; 2.673138ms) Mar 8 16:13:45.762: INFO: (8) /api/v1/namespaces/proxy-445/pods/http:proxy-service-wdwj8-7zsxs:160/proxy/: foo (200; 2.72405ms) Mar 8 16:13:45.763: INFO: (8) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs:162/proxy/: bar (200; 3.741732ms) Mar 8 16:13:45.763: INFO: (8) /api/v1/namespaces/proxy-445/pods/http:proxy-service-wdwj8-7zsxs:162/proxy/: bar (200; 3.747196ms) Mar 8 16:13:45.763: INFO: (8) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs:1080/proxy/: testtest (200; 3.8136ms) Mar 8 16:13:45.763: INFO: (8) /api/v1/namespaces/proxy-445/services/http:proxy-service-wdwj8:portname2/proxy/: bar (200; 3.917776ms) Mar 8 16:13:45.763: INFO: (8) /api/v1/namespaces/proxy-445/services/https:proxy-service-wdwj8:tlsportname2/proxy/: tls qux (200; 3.904639ms) Mar 8 16:13:45.763: INFO: (8) /api/v1/namespaces/proxy-445/services/https:proxy-service-wdwj8:tlsportname1/proxy/: tls baz (200; 3.901351ms) Mar 8 16:13:45.763: INFO: (8) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:443/proxy/: testtest (200; 3.550821ms) Mar 8 16:13:45.768: INFO: (9) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:460/proxy/: tls baz (200; 3.574934ms) Mar 8 16:13:45.768: INFO: (9) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs:162/proxy/: bar (200; 3.775079ms) Mar 8 16:13:45.768: INFO: (9) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:443/proxy/: t... 
(200; 4.468567ms) Mar 8 16:13:45.769: INFO: (9) /api/v1/namespaces/proxy-445/services/http:proxy-service-wdwj8:portname1/proxy/: foo (200; 4.722514ms) Mar 8 16:13:45.769: INFO: (9) /api/v1/namespaces/proxy-445/services/https:proxy-service-wdwj8:tlsportname2/proxy/: tls qux (200; 4.704909ms) Mar 8 16:13:45.769: INFO: (9) /api/v1/namespaces/proxy-445/services/proxy-service-wdwj8:portname2/proxy/: bar (200; 4.72447ms) Mar 8 16:13:45.769: INFO: (9) /api/v1/namespaces/proxy-445/services/proxy-service-wdwj8:portname1/proxy/: foo (200; 4.748211ms) Mar 8 16:13:45.769: INFO: (9) /api/v1/namespaces/proxy-445/services/https:proxy-service-wdwj8:tlsportname1/proxy/: tls baz (200; 4.78475ms) Mar 8 16:13:45.771: INFO: (10) /api/v1/namespaces/proxy-445/pods/http:proxy-service-wdwj8-7zsxs:1080/proxy/: t... (200; 2.469887ms) Mar 8 16:13:45.772: INFO: (10) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs:1080/proxy/: testtest (200; 5.349691ms) Mar 8 16:13:45.777: INFO: (11) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:460/proxy/: tls baz (200; 2.943214ms) Mar 8 16:13:45.777: INFO: (11) /api/v1/namespaces/proxy-445/pods/http:proxy-service-wdwj8-7zsxs:160/proxy/: foo (200; 3.135816ms) Mar 8 16:13:45.777: INFO: (11) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:462/proxy/: tls qux (200; 3.117517ms) Mar 8 16:13:45.777: INFO: (11) /api/v1/namespaces/proxy-445/pods/http:proxy-service-wdwj8-7zsxs:1080/proxy/: t... (200; 3.148282ms) Mar 8 16:13:45.777: INFO: (11) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:443/proxy/: test (200; 3.196434ms) Mar 8 16:13:45.778: INFO: (11) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs:162/proxy/: bar (200; 3.371879ms) Mar 8 16:13:45.778: INFO: (11) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs:1080/proxy/: testtesttest (200; 2.516803ms) Mar 8 16:13:45.782: INFO: (12) /api/v1/namespaces/proxy-445/pods/http:proxy-service-wdwj8-7zsxs:1080/proxy/: t... 
(200; 3.233035ms) Mar 8 16:13:45.783: INFO: (12) /api/v1/namespaces/proxy-445/services/proxy-service-wdwj8:portname2/proxy/: bar (200; 3.128075ms) Mar 8 16:13:45.783: INFO: (12) /api/v1/namespaces/proxy-445/services/http:proxy-service-wdwj8:portname2/proxy/: bar (200; 3.914203ms) Mar 8 16:13:45.783: INFO: (12) /api/v1/namespaces/proxy-445/services/proxy-service-wdwj8:portname1/proxy/: foo (200; 3.539218ms) Mar 8 16:13:45.783: INFO: (12) /api/v1/namespaces/proxy-445/services/http:proxy-service-wdwj8:portname1/proxy/: foo (200; 3.591363ms) Mar 8 16:13:45.783: INFO: (12) /api/v1/namespaces/proxy-445/services/https:proxy-service-wdwj8:tlsportname2/proxy/: tls qux (200; 4.228159ms) Mar 8 16:13:45.783: INFO: (12) /api/v1/namespaces/proxy-445/services/https:proxy-service-wdwj8:tlsportname1/proxy/: tls baz (200; 3.579312ms) Mar 8 16:13:45.785: INFO: (13) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs:162/proxy/: bar (200; 1.793928ms) Mar 8 16:13:45.785: INFO: (13) /api/v1/namespaces/proxy-445/pods/http:proxy-service-wdwj8-7zsxs:160/proxy/: foo (200; 1.910927ms) Mar 8 16:13:45.788: INFO: (13) /api/v1/namespaces/proxy-445/pods/http:proxy-service-wdwj8-7zsxs:162/proxy/: bar (200; 4.310504ms) Mar 8 16:13:45.788: INFO: (13) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs:160/proxy/: foo (200; 4.378395ms) Mar 8 16:13:45.788: INFO: (13) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:462/proxy/: tls qux (200; 4.393684ms) Mar 8 16:13:45.788: INFO: (13) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs/proxy/: test (200; 4.408447ms) Mar 8 16:13:45.788: INFO: (13) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs:1080/proxy/: testt... (200; 4.426955ms) Mar 8 16:13:45.788: INFO: (13) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:460/proxy/: tls baz (200; 4.595809ms) Mar 8 16:13:45.788: INFO: (13) /api/v1/namespaces/proxy-445/services/http:proxy-service-wdwj8:portname2/proxy/: bar (200; 4.950372ms) Mar 8 16:13:45.789: INFO: (13) /api/v1/namespaces/proxy-445/services/http:proxy-service-wdwj8:portname1/proxy/: foo (200; 5.172524ms) Mar 8 16:13:45.789: INFO: (13) /api/v1/namespaces/proxy-445/services/https:proxy-service-wdwj8:tlsportname2/proxy/: tls qux (200; 5.21469ms) Mar 8 16:13:45.789: INFO: (13) /api/v1/namespaces/proxy-445/services/proxy-service-wdwj8:portname1/proxy/: foo (200; 5.246038ms) Mar 8 16:13:45.789: INFO: (13) /api/v1/namespaces/proxy-445/services/proxy-service-wdwj8:portname2/proxy/: bar (200; 5.229831ms) Mar 8 16:13:45.789: INFO: (13) /api/v1/namespaces/proxy-445/services/https:proxy-service-wdwj8:tlsportname1/proxy/: tls baz (200; 5.406532ms) Mar 8 16:13:45.791: INFO: (14) /api/v1/namespaces/proxy-445/pods/http:proxy-service-wdwj8-7zsxs:160/proxy/: foo (200; 2.136625ms) Mar 8 16:13:45.792: INFO: (14) /api/v1/namespaces/proxy-445/pods/http:proxy-service-wdwj8-7zsxs:1080/proxy/: t... (200; 2.865504ms) Mar 8 16:13:45.792: INFO: (14) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:462/proxy/: tls qux (200; 3.423223ms) Mar 8 16:13:45.792: INFO: (14) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs:162/proxy/: bar (200; 3.382616ms) Mar 8 16:13:45.792: INFO: (14) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs/proxy/: test (200; 3.447108ms) Mar 8 16:13:45.792: INFO: (14) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:443/proxy/: testt... 
(200; 3.218401ms) Mar 8 16:13:45.797: INFO: (15) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs:1080/proxy/: testtest (200; 3.281393ms) Mar 8 16:13:45.798: INFO: (15) /api/v1/namespaces/proxy-445/services/proxy-service-wdwj8:portname2/proxy/: bar (200; 3.596828ms) Mar 8 16:13:45.798: INFO: (15) /api/v1/namespaces/proxy-445/services/https:proxy-service-wdwj8:tlsportname1/proxy/: tls baz (200; 3.840923ms) Mar 8 16:13:45.798: INFO: (15) /api/v1/namespaces/proxy-445/services/proxy-service-wdwj8:portname1/proxy/: foo (200; 3.829545ms) Mar 8 16:13:45.798: INFO: (15) /api/v1/namespaces/proxy-445/services/https:proxy-service-wdwj8:tlsportname2/proxy/: tls qux (200; 3.862373ms) Mar 8 16:13:45.806: INFO: (16) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:462/proxy/: tls qux (200; 8.065686ms) Mar 8 16:13:45.806: INFO: (16) /api/v1/namespaces/proxy-445/pods/http:proxy-service-wdwj8-7zsxs:162/proxy/: bar (200; 8.155035ms) Mar 8 16:13:45.807: INFO: (16) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs/proxy/: test (200; 8.786568ms) Mar 8 16:13:45.807: INFO: (16) /api/v1/namespaces/proxy-445/pods/http:proxy-service-wdwj8-7zsxs:1080/proxy/: t... (200; 8.837487ms) Mar 8 16:13:45.807: INFO: (16) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs:160/proxy/: foo (200; 8.942965ms) Mar 8 16:13:45.807: INFO: (16) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs:1080/proxy/: testt... (200; 2.053309ms) Mar 8 16:13:45.810: INFO: (17) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs/proxy/: test (200; 2.287278ms) Mar 8 16:13:45.812: INFO: (17) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs:162/proxy/: bar (200; 4.542906ms) Mar 8 16:13:45.812: INFO: (17) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:443/proxy/: testtest (200; 1.835789ms) Mar 8 16:13:45.815: INFO: (18) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:462/proxy/: tls qux (200; 2.377415ms) Mar 8 16:13:45.816: INFO: (18) /api/v1/namespaces/proxy-445/pods/http:proxy-service-wdwj8-7zsxs:160/proxy/: foo (200; 2.859518ms) Mar 8 16:13:45.816: INFO: (18) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs:160/proxy/: foo (200; 3.137041ms) Mar 8 16:13:45.816: INFO: (18) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:460/proxy/: tls baz (200; 3.129975ms) Mar 8 16:13:45.816: INFO: (18) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs:1080/proxy/: testt... 
(200; 3.32783ms) Mar 8 16:13:45.816: INFO: (18) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs:162/proxy/: bar (200; 3.353818ms) Mar 8 16:13:45.816: INFO: (18) /api/v1/namespaces/proxy-445/pods/http:proxy-service-wdwj8-7zsxs:162/proxy/: bar (200; 3.369913ms) Mar 8 16:13:45.816: INFO: (18) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:443/proxy/: testtest (200; 5.158723ms) Mar 8 16:13:45.822: INFO: (19) /api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs:160/proxy/: foo (200; 5.172075ms) Mar 8 16:13:45.822: INFO: (19) /api/v1/namespaces/proxy-445/services/https:proxy-service-wdwj8:tlsportname1/proxy/: tls baz (200; 5.188363ms) Mar 8 16:13:45.822: INFO: (19) /api/v1/namespaces/proxy-445/pods/http:proxy-service-wdwj8-7zsxs:162/proxy/: bar (200; 5.255085ms) Mar 8 16:13:45.822: INFO: (19) /api/v1/namespaces/proxy-445/pods/http:proxy-service-wdwj8-7zsxs:160/proxy/: foo (200; 5.258268ms) Mar 8 16:13:45.822: INFO: (19) /api/v1/namespaces/proxy-445/services/proxy-service-wdwj8:portname1/proxy/: foo (200; 5.214557ms) Mar 8 16:13:45.822: INFO: (19) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:443/proxy/: t... (200; 5.237125ms) Mar 8 16:13:45.822: INFO: (19) /api/v1/namespaces/proxy-445/services/https:proxy-service-wdwj8:tlsportname2/proxy/: tls qux (200; 5.208734ms) Mar 8 16:13:45.822: INFO: (19) /api/v1/namespaces/proxy-445/pods/https:proxy-service-wdwj8-7zsxs:462/proxy/: tls qux (200; 5.351212ms) STEP: deleting ReplicationController proxy-service-wdwj8 in namespace proxy-445, will wait for the garbage collector to delete the pods Mar 8 16:13:45.878: INFO: Deleting ReplicationController proxy-service-wdwj8 took: 4.349841ms Mar 8 16:13:46.179: INFO: Terminating ReplicationController proxy-service-wdwj8 pods took: 300.234147ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:13:52.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-445" for this suite. 
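The 320 requests above all go through the API server proxy subresource, which forwards to a pod or service port: the http:/https: prefix picks the scheme, and the :1080, :160, :portname1 style suffixes pick the port. A minimal way to reproduce one of these requests by hand with kubectl, assuming a live cluster; the namespace and pod name below are the transient ones from this run and would have to be replaced:

  # proxy to a numbered container port on a pod
  kubectl get --raw "/api/v1/namespaces/proxy-445/pods/proxy-service-wdwj8-7zsxs:1080/proxy/"
  # proxy to a named service port
  kubectl get --raw "/api/v1/namespaces/proxy-445/services/proxy-service-wdwj8:portname1/proxy/"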
• [SLOW TEST:18.915 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":280,"completed":248,"skipped":4018,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:13:52.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 8 16:13:52.577: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9715 /api/v1/namespaces/watch-9715/configmaps/e2e-watch-test-label-changed 9ae0710e-3e65-4d50-ac2f-87c2bcaa7030 30375 0 2020-03-08 16:13:52 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 16:13:52.577: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9715 /api/v1/namespaces/watch-9715/configmaps/e2e-watch-test-label-changed 9ae0710e-3e65-4d50-ac2f-87c2bcaa7030 30376 0 2020-03-08 16:13:52 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 16:13:52.577: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9715 /api/v1/namespaces/watch-9715/configmaps/e2e-watch-test-label-changed 9ae0710e-3e65-4d50-ac2f-87c2bcaa7030 30377 0 2020-03-08 16:13:52 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 8 16:14:02.632: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9715 /api/v1/namespaces/watch-9715/configmaps/e2e-watch-test-label-changed 9ae0710e-3e65-4d50-ac2f-87c2bcaa7030 30424 0 2020-03-08 16:13:52 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] 
[]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 16:14:02.632: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9715 /api/v1/namespaces/watch-9715/configmaps/e2e-watch-test-label-changed 9ae0710e-3e65-4d50-ac2f-87c2bcaa7030 30425 0 2020-03-08 16:13:52 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 16:14:02.632: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9715 /api/v1/namespaces/watch-9715/configmaps/e2e-watch-test-label-changed 9ae0710e-3e65-4d50-ac2f-87c2bcaa7030 30426 0 2020-03-08 16:13:52 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:14:02.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9715" for this suite. • [SLOW TEST:10.158 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":280,"completed":249,"skipped":4026,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:14:02.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:14:18.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6547" for this suite. • [SLOW TEST:16.101 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":280,"completed":250,"skipped":4029,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:14:18.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 16:14:18.816: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7574' Mar 8 16:14:19.075: INFO: stderr: "" Mar 8 16:14:19.075: INFO: stdout: "replicationcontroller/agnhost-master created\n" Mar 8 16:14:19.076: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7574' Mar 8 16:14:19.355: INFO: stderr: "" Mar 8 16:14:19.355: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 8 16:14:21.481: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 16:14:21.481: INFO: Found 0 / 1 Mar 8 16:14:22.359: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 16:14:22.359: INFO: Found 0 / 1 Mar 8 16:14:23.358: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 16:14:23.358: INFO: Found 1 / 1 Mar 8 16:14:23.358: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 8 16:14:23.361: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 16:14:23.361: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Mar 8 16:14:23.361: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe pod agnhost-master-86m6w --namespace=kubectl-7574' Mar 8 16:14:23.518: INFO: stderr: "" Mar 8 16:14:23.518: INFO: stdout: "Name: agnhost-master-86m6w\nNamespace: kubectl-7574\nPriority: 0\nNode: latest-worker/172.17.0.16\nStart Time: Sun, 08 Mar 2020 16:14:19 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.28\nIPs:\n IP: 10.244.1.28\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://c8aff443b4198512c4145ed051afcf3f20fea417854083db8720fd7e4aca2986\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sun, 08 Mar 2020 16:14:20 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-65s8n (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-65s8n:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-65s8n\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-7574/agnhost-master-86m6w to latest-worker\n Normal Pulled 4s kubelet, latest-worker Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 3s kubelet, latest-worker Created container agnhost-master\n Normal Started 2s kubelet, latest-worker Started container agnhost-master\n" Mar 8 16:14:23.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-7574' Mar 8 16:14:23.653: INFO: stderr: "" Mar 8 16:14:23.653: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7574\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-86m6w\n" Mar 8 16:14:23.653: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-7574' Mar 8 16:14:23.760: INFO: stderr: "" Mar 8 16:14:23.760: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7574\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.222.115\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.28:6379\nSession Affinity: None\nEvents: \n" Mar 8 16:14:23.767: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe node latest-control-plane' Mar 8 16:14:23.900: INFO: stderr: 
"" Mar 8 16:14:23.900: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 08 Mar 2020 14:49:22 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Sun, 08 Mar 2020 16:14:21 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sun, 08 Mar 2020 16:10:18 +0000 Sun, 08 Mar 2020 14:49:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sun, 08 Mar 2020 16:10:18 +0000 Sun, 08 Mar 2020 14:49:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sun, 08 Mar 2020 16:10:18 +0000 Sun, 08 Mar 2020 14:49:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sun, 08 Mar 2020 16:10:18 +0000 Sun, 08 Mar 2020 14:50:16 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.17\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nSystem Info:\n Machine ID: fb03af8223ea4430b6faaad8b31da5e5\n System UUID: 220fc748-c3b9-4de4-aa76-4a3520169f00\n Boot ID: 3de0b5b8-8b8f-48d3-9705-cabccc881bdb\n Kernel Version: 4.4.0-142-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (8 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-gxrvh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 84m\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 84m\n kube-system kindnet-gp8bt 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 84m\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 84m\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 84m\n kube-system kube-proxy-nxxmk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 84m\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 84m\n local-path-storage local-path-provisioner-7745554f7f-52xw4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 84m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 750m (4%) 100m (0%)\n memory 120Mi (0%) 220Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Mar 8 16:14:23.900: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe namespace kubectl-7574' Mar 8 16:14:23.995: INFO: stderr: "" Mar 8 16:14:23.995: INFO: stdout: "Name: kubectl-7574\nLabels: e2e-framework=kubectl\n 
e2e-run=2ddc46cb-3b0d-4dae-be69-a78d2ea54b5c\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:14:23.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7574" for this suite. • [SLOW TEST:5.253 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1156 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":280,"completed":251,"skipped":4046,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:14:24.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod busybox-3b280b69-f811-4db6-a1de-ba4a0f064462 in namespace container-probe-9302 Mar 8 16:14:26.051: INFO: Started pod busybox-3b280b69-f811-4db6-a1de-ba4a0f064462 in namespace container-probe-9302 STEP: checking the pod's current state and verifying that restartCount is present Mar 8 16:14:26.054: INFO: Initial restart count of pod busybox-3b280b69-f811-4db6-a1de-ba4a0f064462 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:18:26.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9302" for this suite. 
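The probe being exercised here runs cat /tmp/health inside the container; the container creates that file at startup and never removes it, so the probe keeps succeeding and the restart count observed over the four-minute window stays at 0. A minimal sketch of a pod with the same shape of probe, assuming a live cluster (the pod name is hypothetical, not the one from this run):

  kubectl create -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-exec-demo    # hypothetical name
  spec:
    containers:
    - name: busybox
      image: busybox
      # create the health file, then stay alive long enough to observe the probe
      args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/health"]
        initialDelaySeconds: 5
        periodSeconds: 10
  EOF

  # the restart count should remain 0 for as long as /tmp/health exists
  kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'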
• [SLOW TEST:242.955 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":280,"completed":252,"skipped":4057,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:18:26.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-bb8debd7-4349-44ce-9787-3f1df5408e9d STEP: Creating a pod to test consume configMaps Mar 8 16:18:27.017: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-19c96bdc-6b31-425e-ad0e-4ccf3440e7f9" in namespace "projected-9653" to be "success or failure" Mar 8 16:18:27.020: INFO: Pod "pod-projected-configmaps-19c96bdc-6b31-425e-ad0e-4ccf3440e7f9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.377672ms Mar 8 16:18:29.024: INFO: Pod "pod-projected-configmaps-19c96bdc-6b31-425e-ad0e-4ccf3440e7f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006905431s STEP: Saw pod success Mar 8 16:18:29.024: INFO: Pod "pod-projected-configmaps-19c96bdc-6b31-425e-ad0e-4ccf3440e7f9" satisfied condition "success or failure" Mar 8 16:18:29.027: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-19c96bdc-6b31-425e-ad0e-4ccf3440e7f9 container projected-configmap-volume-test: STEP: delete the pod Mar 8 16:18:29.073: INFO: Waiting for pod pod-projected-configmaps-19c96bdc-6b31-425e-ad0e-4ccf3440e7f9 to disappear Mar 8 16:18:29.077: INFO: Pod pod-projected-configmaps-19c96bdc-6b31-425e-ad0e-4ccf3440e7f9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:18:29.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9653" for this suite. 
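Here the config map data reaches the container through a projected volume rather than a plain configMap volume; the test container just reads the mounted file and exits, which is why the pod goes straight to Succeeded. A minimal sketch of the same consumption pattern, assuming a live cluster (all names are hypothetical):

  kubectl create configmap demo-config --from-literal=data-1=value-1
  kubectl create -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-configmap-demo    # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: reader
      image: busybox
      # print the projected file and exit, so the pod ends up Succeeded
      args: ["/bin/sh", "-c", "cat /etc/projected/data-1"]
      volumeMounts:
      - name: config
        mountPath: /etc/projected
    volumes:
    - name: config
      projected:
        sources:
        - configMap:
            name: demo-config
  EOF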
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":253,"skipped":4063,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:18:29.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 8 16:18:29.157: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5066' Mar 8 16:18:32.057: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 8 16:18:32.057: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 Mar 8 16:18:32.059: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-5066' Mar 8 16:18:32.229: INFO: stderr: "" Mar 8 16:18:32.230: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:18:32.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5066" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":280,"completed":254,"skipped":4069,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:18:32.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service externalname-service with the type=ExternalName in namespace services-726 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-726 I0308 16:18:32.372803 7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-726, replica count: 2 I0308 16:18:35.423218 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 8 16:18:35.423: INFO: Creating new exec pod Mar 8 16:18:38.448: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-726 execpod6m2hg -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 8 16:18:38.656: INFO: stderr: "I0308 16:18:38.565033 3390 log.go:172] (0xc000a52f20) (0xc000b16500) Create stream\nI0308 16:18:38.565073 3390 log.go:172] (0xc000a52f20) (0xc000b16500) Stream added, broadcasting: 1\nI0308 16:18:38.567513 3390 log.go:172] (0xc000a52f20) Reply frame received for 1\nI0308 16:18:38.567553 3390 log.go:172] (0xc000a52f20) (0xc000b580a0) Create stream\nI0308 16:18:38.567568 3390 log.go:172] (0xc000a52f20) (0xc000b580a0) Stream added, broadcasting: 3\nI0308 16:18:38.568254 3390 log.go:172] (0xc000a52f20) Reply frame received for 3\nI0308 16:18:38.568280 3390 log.go:172] (0xc000a52f20) (0xc000a1c000) Create stream\nI0308 16:18:38.568289 3390 log.go:172] (0xc000a52f20) (0xc000a1c000) Stream added, broadcasting: 5\nI0308 16:18:38.568856 3390 log.go:172] (0xc000a52f20) Reply frame received for 5\nI0308 16:18:38.649794 3390 log.go:172] (0xc000a52f20) Data frame received for 5\nI0308 16:18:38.649819 3390 log.go:172] (0xc000a1c000) (5) Data frame handling\nI0308 16:18:38.649837 3390 log.go:172] (0xc000a1c000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0308 16:18:38.650736 3390 log.go:172] (0xc000a52f20) Data frame received for 5\nI0308 16:18:38.650758 3390 log.go:172] (0xc000a1c000) (5) Data frame handling\nI0308 16:18:38.650776 3390 log.go:172] (0xc000a1c000) (5) Data frame sent\nI0308 16:18:38.650786 3390 log.go:172] (0xc000a52f20) Data frame received for 5\nI0308 16:18:38.650793 3390 log.go:172] (0xc000a1c000) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0308 16:18:38.650951 3390 log.go:172] 
(0xc000a52f20) Data frame received for 3\nI0308 16:18:38.650971 3390 log.go:172] (0xc000b580a0) (3) Data frame handling\nI0308 16:18:38.652668 3390 log.go:172] (0xc000a52f20) Data frame received for 1\nI0308 16:18:38.652682 3390 log.go:172] (0xc000b16500) (1) Data frame handling\nI0308 16:18:38.652691 3390 log.go:172] (0xc000b16500) (1) Data frame sent\nI0308 16:18:38.652701 3390 log.go:172] (0xc000a52f20) (0xc000b16500) Stream removed, broadcasting: 1\nI0308 16:18:38.652833 3390 log.go:172] (0xc000a52f20) Go away received\nI0308 16:18:38.653013 3390 log.go:172] (0xc000a52f20) (0xc000b16500) Stream removed, broadcasting: 1\nI0308 16:18:38.653029 3390 log.go:172] (0xc000a52f20) (0xc000b580a0) Stream removed, broadcasting: 3\nI0308 16:18:38.653037 3390 log.go:172] (0xc000a52f20) (0xc000a1c000) Stream removed, broadcasting: 5\n" Mar 8 16:18:38.656: INFO: stdout: "" Mar 8 16:18:38.656: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-726 execpod6m2hg -- /bin/sh -x -c nc -zv -t -w 2 10.96.132.57 80' Mar 8 16:18:38.874: INFO: stderr: "I0308 16:18:38.806592 3409 log.go:172] (0xc00002c6e0) (0xc0005d6820) Create stream\nI0308 16:18:38.806642 3409 log.go:172] (0xc00002c6e0) (0xc0005d6820) Stream added, broadcasting: 1\nI0308 16:18:38.809538 3409 log.go:172] (0xc00002c6e0) Reply frame received for 1\nI0308 16:18:38.809580 3409 log.go:172] (0xc00002c6e0) (0xc0006cfc20) Create stream\nI0308 16:18:38.809597 3409 log.go:172] (0xc00002c6e0) (0xc0006cfc20) Stream added, broadcasting: 3\nI0308 16:18:38.810544 3409 log.go:172] (0xc00002c6e0) Reply frame received for 3\nI0308 16:18:38.810593 3409 log.go:172] (0xc00002c6e0) (0xc0006cfe00) Create stream\nI0308 16:18:38.810611 3409 log.go:172] (0xc00002c6e0) (0xc0006cfe00) Stream added, broadcasting: 5\nI0308 16:18:38.811512 3409 log.go:172] (0xc00002c6e0) Reply frame received for 5\nI0308 16:18:38.868413 3409 log.go:172] (0xc00002c6e0) Data frame received for 3\nI0308 16:18:38.868447 3409 log.go:172] (0xc0006cfc20) (3) Data frame handling\nI0308 16:18:38.868794 3409 log.go:172] (0xc00002c6e0) Data frame received for 5\nI0308 16:18:38.868817 3409 log.go:172] (0xc0006cfe00) (5) Data frame handling\nI0308 16:18:38.868842 3409 log.go:172] (0xc0006cfe00) (5) Data frame sent\nI0308 16:18:38.868855 3409 log.go:172] (0xc00002c6e0) Data frame received for 5\nI0308 16:18:38.868865 3409 log.go:172] (0xc0006cfe00) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.132.57 80\nConnection to 10.96.132.57 80 port [tcp/http] succeeded!\nI0308 16:18:38.870390 3409 log.go:172] (0xc00002c6e0) Data frame received for 1\nI0308 16:18:38.870427 3409 log.go:172] (0xc0005d6820) (1) Data frame handling\nI0308 16:18:38.870455 3409 log.go:172] (0xc0005d6820) (1) Data frame sent\nI0308 16:18:38.870474 3409 log.go:172] (0xc00002c6e0) (0xc0005d6820) Stream removed, broadcasting: 1\nI0308 16:18:38.870489 3409 log.go:172] (0xc00002c6e0) Go away received\nI0308 16:18:38.870850 3409 log.go:172] (0xc00002c6e0) (0xc0005d6820) Stream removed, broadcasting: 1\nI0308 16:18:38.870868 3409 log.go:172] (0xc00002c6e0) (0xc0006cfc20) Stream removed, broadcasting: 3\nI0308 16:18:38.870878 3409 log.go:172] (0xc00002c6e0) (0xc0006cfe00) Stream removed, broadcasting: 5\n" Mar 8 16:18:38.874: INFO: stdout: "" Mar 8 16:18:38.874: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 
16:18:38.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-726" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:6.674 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":280,"completed":255,"skipped":4078,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:18:38.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating the pod Mar 8 16:18:41.528: INFO: Successfully updated pod "labelsupdate49a0dae4-a5be-4fd1-9fec-3e64772a5350" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:18:43.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9364" for this suite. 
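What this spec checks is that a downward API file projected into the pod is rewritten by the kubelet after the pod's labels change, which is what the "Successfully updated pod" line is confirming. A minimal sketch of the same mechanism, assuming a live cluster (all names are hypothetical):

  kubectl create -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: labels-demo    # hypothetical name
    labels:
      stage: before
  spec:
    containers:
    - name: watcher
      image: busybox
      # keep printing the projected labels file so updates show up in the logs
      args: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: labels
              fieldRef:
                fieldPath: metadata.labels
  EOF

  # the label change is reflected in /etc/podinfo/labels after a short delay
  kubectl label pod labels-demo stage=after --overwrite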
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":280,"completed":256,"skipped":4095,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:18:43.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating service nodeport-test with type=NodePort in namespace services-1342 STEP: creating replication controller nodeport-test in namespace services-1342 I0308 16:18:43.710520 7 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-1342, replica count: 2 I0308 16:18:46.761091 7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 8 16:18:46.761: INFO: Creating new exec pod Mar 8 16:18:49.781: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-1342 execpodrbg8v -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Mar 8 16:18:50.208: INFO: stderr: "I0308 16:18:50.152795 3429 log.go:172] (0xc00003a0b0) (0xc000914000) Create stream\nI0308 16:18:50.152850 3429 log.go:172] (0xc00003a0b0) (0xc000914000) Stream added, broadcasting: 1\nI0308 16:18:50.155115 3429 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0308 16:18:50.155147 3429 log.go:172] (0xc00003a0b0) (0xc000569360) Create stream\nI0308 16:18:50.155155 3429 log.go:172] (0xc00003a0b0) (0xc000569360) Stream added, broadcasting: 3\nI0308 16:18:50.155910 3429 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0308 16:18:50.155931 3429 log.go:172] (0xc00003a0b0) (0xc0006f3a40) Create stream\nI0308 16:18:50.155937 3429 log.go:172] (0xc00003a0b0) (0xc0006f3a40) Stream added, broadcasting: 5\nI0308 16:18:50.156691 3429 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0308 16:18:50.201549 3429 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0308 16:18:50.201634 3429 log.go:172] (0xc0006f3a40) (5) Data frame handling\nI0308 16:18:50.201659 3429 log.go:172] (0xc0006f3a40) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0308 16:18:50.202484 3429 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0308 16:18:50.202517 3429 log.go:172] (0xc0006f3a40) (5) Data frame handling\nI0308 16:18:50.202543 3429 log.go:172] (0xc0006f3a40) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0308 16:18:50.203303 3429 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0308 16:18:50.203324 3429 log.go:172] (0xc0006f3a40) (5) Data frame handling\nI0308 16:18:50.203525 3429 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0308 16:18:50.203534 3429 log.go:172] (0xc000569360) (3) Data frame handling\nI0308 16:18:50.204330 
3429 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0308 16:18:50.204346 3429 log.go:172] (0xc000914000) (1) Data frame handling\nI0308 16:18:50.204362 3429 log.go:172] (0xc000914000) (1) Data frame sent\nI0308 16:18:50.204691 3429 log.go:172] (0xc00003a0b0) (0xc000914000) Stream removed, broadcasting: 1\nI0308 16:18:50.204720 3429 log.go:172] (0xc00003a0b0) Go away received\nI0308 16:18:50.204937 3429 log.go:172] (0xc00003a0b0) (0xc000914000) Stream removed, broadcasting: 1\nI0308 16:18:50.204950 3429 log.go:172] (0xc00003a0b0) (0xc000569360) Stream removed, broadcasting: 3\nI0308 16:18:50.204955 3429 log.go:172] (0xc00003a0b0) (0xc0006f3a40) Stream removed, broadcasting: 5\n" Mar 8 16:18:50.208: INFO: stdout: "" Mar 8 16:18:50.209: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-1342 execpodrbg8v -- /bin/sh -x -c nc -zv -t -w 2 10.96.28.14 80' Mar 8 16:18:50.379: INFO: stderr: "I0308 16:18:50.316065 3448 log.go:172] (0xc0008f91e0) (0xc0008d0640) Create stream\nI0308 16:18:50.316118 3448 log.go:172] (0xc0008f91e0) (0xc0008d0640) Stream added, broadcasting: 1\nI0308 16:18:50.320678 3448 log.go:172] (0xc0008f91e0) Reply frame received for 1\nI0308 16:18:50.320715 3448 log.go:172] (0xc0008f91e0) (0xc000547180) Create stream\nI0308 16:18:50.320728 3448 log.go:172] (0xc0008f91e0) (0xc000547180) Stream added, broadcasting: 3\nI0308 16:18:50.323838 3448 log.go:172] (0xc0008f91e0) Reply frame received for 3\nI0308 16:18:50.323866 3448 log.go:172] (0xc0008f91e0) (0xc000826000) Create stream\nI0308 16:18:50.323874 3448 log.go:172] (0xc0008f91e0) (0xc000826000) Stream added, broadcasting: 5\nI0308 16:18:50.324584 3448 log.go:172] (0xc0008f91e0) Reply frame received for 5\nI0308 16:18:50.375398 3448 log.go:172] (0xc0008f91e0) Data frame received for 5\nI0308 16:18:50.375423 3448 log.go:172] (0xc000826000) (5) Data frame handling\nI0308 16:18:50.375432 3448 log.go:172] (0xc000826000) (5) Data frame sent\nI0308 16:18:50.375438 3448 log.go:172] (0xc0008f91e0) Data frame received for 5\nI0308 16:18:50.375443 3448 log.go:172] (0xc000826000) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.28.14 80\nConnection to 10.96.28.14 80 port [tcp/http] succeeded!\nI0308 16:18:50.375458 3448 log.go:172] (0xc0008f91e0) Data frame received for 3\nI0308 16:18:50.375464 3448 log.go:172] (0xc000547180) (3) Data frame handling\nI0308 16:18:50.376400 3448 log.go:172] (0xc0008f91e0) Data frame received for 1\nI0308 16:18:50.376412 3448 log.go:172] (0xc0008d0640) (1) Data frame handling\nI0308 16:18:50.376419 3448 log.go:172] (0xc0008d0640) (1) Data frame sent\nI0308 16:18:50.376426 3448 log.go:172] (0xc0008f91e0) (0xc0008d0640) Stream removed, broadcasting: 1\nI0308 16:18:50.376433 3448 log.go:172] (0xc0008f91e0) Go away received\nI0308 16:18:50.376732 3448 log.go:172] (0xc0008f91e0) (0xc0008d0640) Stream removed, broadcasting: 1\nI0308 16:18:50.376746 3448 log.go:172] (0xc0008f91e0) (0xc000547180) Stream removed, broadcasting: 3\nI0308 16:18:50.376753 3448 log.go:172] (0xc0008f91e0) (0xc000826000) Stream removed, broadcasting: 5\n" Mar 8 16:18:50.379: INFO: stdout: "" Mar 8 16:18:50.379: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-1342 execpodrbg8v -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.16 31144' Mar 8 16:18:50.533: INFO: stderr: "I0308 16:18:50.471389 3466 log.go:172] (0xc000b54dc0) (0xc0008a34a0) Create stream\nI0308 16:18:50.471426 3466 
log.go:172] (0xc000b54dc0) (0xc0008a34a0) Stream added, broadcasting: 1\nI0308 16:18:50.473079 3466 log.go:172] (0xc000b54dc0) Reply frame received for 1\nI0308 16:18:50.473107 3466 log.go:172] (0xc000b54dc0) (0xc0002934a0) Create stream\nI0308 16:18:50.473116 3466 log.go:172] (0xc000b54dc0) (0xc0002934a0) Stream added, broadcasting: 3\nI0308 16:18:50.473680 3466 log.go:172] (0xc000b54dc0) Reply frame received for 3\nI0308 16:18:50.473697 3466 log.go:172] (0xc000b54dc0) (0xc0008a3540) Create stream\nI0308 16:18:50.473702 3466 log.go:172] (0xc000b54dc0) (0xc0008a3540) Stream added, broadcasting: 5\nI0308 16:18:50.474211 3466 log.go:172] (0xc000b54dc0) Reply frame received for 5\nI0308 16:18:50.527706 3466 log.go:172] (0xc000b54dc0) Data frame received for 5\nI0308 16:18:50.527732 3466 log.go:172] (0xc0008a3540) (5) Data frame handling\nI0308 16:18:50.527745 3466 log.go:172] (0xc0008a3540) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.16 31144\nConnection to 172.17.0.16 31144 port [tcp/31144] succeeded!\nI0308 16:18:50.527895 3466 log.go:172] (0xc000b54dc0) Data frame received for 3\nI0308 16:18:50.527916 3466 log.go:172] (0xc0002934a0) (3) Data frame handling\nI0308 16:18:50.527929 3466 log.go:172] (0xc000b54dc0) Data frame received for 5\nI0308 16:18:50.527946 3466 log.go:172] (0xc0008a3540) (5) Data frame handling\nI0308 16:18:50.530213 3466 log.go:172] (0xc000b54dc0) Data frame received for 1\nI0308 16:18:50.530240 3466 log.go:172] (0xc0008a34a0) (1) Data frame handling\nI0308 16:18:50.530279 3466 log.go:172] (0xc0008a34a0) (1) Data frame sent\nI0308 16:18:50.530300 3466 log.go:172] (0xc000b54dc0) (0xc0008a34a0) Stream removed, broadcasting: 1\nI0308 16:18:50.530405 3466 log.go:172] (0xc000b54dc0) Go away received\nI0308 16:18:50.530736 3466 log.go:172] (0xc000b54dc0) (0xc0008a34a0) Stream removed, broadcasting: 1\nI0308 16:18:50.530756 3466 log.go:172] (0xc000b54dc0) (0xc0002934a0) Stream removed, broadcasting: 3\nI0308 16:18:50.530781 3466 log.go:172] (0xc000b54dc0) (0xc0008a3540) Stream removed, broadcasting: 5\n" Mar 8 16:18:50.533: INFO: stdout: "" Mar 8 16:18:50.533: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-1342 execpodrbg8v -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 31144' Mar 8 16:18:50.684: INFO: stderr: "I0308 16:18:50.631283 3486 log.go:172] (0xc0008dc630) (0xc0009aa000) Create stream\nI0308 16:18:50.631320 3486 log.go:172] (0xc0008dc630) (0xc0009aa000) Stream added, broadcasting: 1\nI0308 16:18:50.633239 3486 log.go:172] (0xc0008dc630) Reply frame received for 1\nI0308 16:18:50.633272 3486 log.go:172] (0xc0008dc630) (0xc0009aa0a0) Create stream\nI0308 16:18:50.633282 3486 log.go:172] (0xc0008dc630) (0xc0009aa0a0) Stream added, broadcasting: 3\nI0308 16:18:50.634070 3486 log.go:172] (0xc0008dc630) Reply frame received for 3\nI0308 16:18:50.634099 3486 log.go:172] (0xc0008dc630) (0xc0009aa140) Create stream\nI0308 16:18:50.634109 3486 log.go:172] (0xc0008dc630) (0xc0009aa140) Stream added, broadcasting: 5\nI0308 16:18:50.634729 3486 log.go:172] (0xc0008dc630) Reply frame received for 5\nI0308 16:18:50.680005 3486 log.go:172] (0xc0008dc630) Data frame received for 5\nI0308 16:18:50.680029 3486 log.go:172] (0xc0009aa140) (5) Data frame handling\nI0308 16:18:50.680052 3486 log.go:172] (0xc0009aa140) (5) Data frame sent\nI0308 16:18:50.680065 3486 log.go:172] (0xc0008dc630) Data frame received for 5\nI0308 16:18:50.680075 3486 log.go:172] (0xc0009aa140) (5) Data frame handling\n+ nc -zv -t 
-w 2 172.17.0.18 31144\nConnection to 172.17.0.18 31144 port [tcp/31144] succeeded!\nI0308 16:18:50.680314 3486 log.go:172] (0xc0008dc630) Data frame received for 3\nI0308 16:18:50.680343 3486 log.go:172] (0xc0009aa0a0) (3) Data frame handling\nI0308 16:18:50.681532 3486 log.go:172] (0xc0008dc630) Data frame received for 1\nI0308 16:18:50.681551 3486 log.go:172] (0xc0009aa000) (1) Data frame handling\nI0308 16:18:50.681566 3486 log.go:172] (0xc0009aa000) (1) Data frame sent\nI0308 16:18:50.681591 3486 log.go:172] (0xc0008dc630) (0xc0009aa000) Stream removed, broadcasting: 1\nI0308 16:18:50.681612 3486 log.go:172] (0xc0008dc630) Go away received\nI0308 16:18:50.682068 3486 log.go:172] (0xc0008dc630) (0xc0009aa000) Stream removed, broadcasting: 1\nI0308 16:18:50.682089 3486 log.go:172] (0xc0008dc630) (0xc0009aa0a0) Stream removed, broadcasting: 3\nI0308 16:18:50.682098 3486 log.go:172] (0xc0008dc630) (0xc0009aa140) Stream removed, broadcasting: 5\n" Mar 8 16:18:50.684: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:18:50.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1342" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:7.114 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":280,"completed":257,"skipped":4104,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:18:50.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-c633fc69-44a1-4552-a40e-8f1a2249eb1b STEP: Creating a pod to test consume secrets Mar 8 16:18:50.813: INFO: Waiting up to 5m0s for pod "pod-secrets-721c180e-93a7-47a7-8b78-fc7de7eb3bbb" in namespace "secrets-7480" to be "success or failure" Mar 8 16:18:50.817: INFO: Pod "pod-secrets-721c180e-93a7-47a7-8b78-fc7de7eb3bbb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.547153ms Mar 8 16:18:52.821: INFO: Pod "pod-secrets-721c180e-93a7-47a7-8b78-fc7de7eb3bbb": Phase="Running", Reason="", readiness=true. Elapsed: 2.008235899s Mar 8 16:18:54.825: INFO: Pod "pod-secrets-721c180e-93a7-47a7-8b78-fc7de7eb3bbb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012171178s STEP: Saw pod success Mar 8 16:18:54.825: INFO: Pod "pod-secrets-721c180e-93a7-47a7-8b78-fc7de7eb3bbb" satisfied condition "success or failure" Mar 8 16:18:54.827: INFO: Trying to get logs from node latest-worker pod pod-secrets-721c180e-93a7-47a7-8b78-fc7de7eb3bbb container secret-volume-test: STEP: delete the pod Mar 8 16:18:54.849: INFO: Waiting for pod pod-secrets-721c180e-93a7-47a7-8b78-fc7de7eb3bbb to disappear Mar 8 16:18:54.866: INFO: Pod pod-secrets-721c180e-93a7-47a7-8b78-fc7de7eb3bbb no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:18:54.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7480" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":280,"completed":258,"skipped":4116,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:18:54.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 16:18:59.012: INFO: Waiting up to 5m0s for pod "client-envvars-d941537d-5de8-470d-8075-f46c8e9517bf" in namespace "pods-6988" to be "success or failure" Mar 8 16:18:59.044: INFO: Pod "client-envvars-d941537d-5de8-470d-8075-f46c8e9517bf": Phase="Pending", Reason="", readiness=false. Elapsed: 32.079157ms Mar 8 16:19:01.047: INFO: Pod "client-envvars-d941537d-5de8-470d-8075-f46c8e9517bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.035912469s STEP: Saw pod success Mar 8 16:19:01.048: INFO: Pod "client-envvars-d941537d-5de8-470d-8075-f46c8e9517bf" satisfied condition "success or failure" Mar 8 16:19:01.050: INFO: Trying to get logs from node latest-worker pod client-envvars-d941537d-5de8-470d-8075-f46c8e9517bf container env3cont: STEP: delete the pod Mar 8 16:19:01.104: INFO: Waiting for pod client-envvars-d941537d-5de8-470d-8075-f46c8e9517bf to disappear Mar 8 16:19:01.120: INFO: Pod client-envvars-d941537d-5de8-470d-8075-f46c8e9517bf no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:19:01.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6988" for this suite. 
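For reference, the env-var behavior this test asserts can be reproduced by hand. A minimal sketch, not the framework's own code: the names envvar-server, fooservice, and envvar-client are made up for illustration, and a reachable cluster on the current kubeconfig is assumed.
# Back a service with a pod, then create the service itself.
kubectl run envvar-server --image=nginx --restart=Never --labels=app=envvar-server
kubectl expose pod envvar-server --name=fooservice --port=8765 --target-port=80
# Containers started *after* the service exists get discovery variables
# injected by the kubelet: FOOSERVICE_SERVICE_HOST, FOOSERVICE_SERVICE_PORT, ...
kubectl run envvar-client --image=busybox --restart=Never -- sh -c 'env | grep FOOSERVICE'
kubectl logs envvar-client
The ordering matters: the variables are captured at container start, which is why the test above waits for the backing pod to be running before it creates the client pod.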
• [SLOW TEST:6.251 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":280,"completed":259,"skipped":4129,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:19:01.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 16:19:01.566: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 16:19:04.657: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:19:04.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1993" for this suite. STEP: Destroying namespace "webhook-1993-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":280,"completed":260,"skipped":4143,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:19:04.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:19:21.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-652" for this suite. • [SLOW TEST:17.162 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":280,"completed":261,"skipped":4172,"failed":0} S ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:19:21.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 16:19:22.020: INFO: Creating ReplicaSet my-hostname-basic-7a06470d-1e9d-46b2-9695-ba601f4b10e4 Mar 8 16:19:22.031: INFO: Pod name my-hostname-basic-7a06470d-1e9d-46b2-9695-ba601f4b10e4: Found 0 pods out of 1 Mar 8 16:19:27.048: INFO: Pod name my-hostname-basic-7a06470d-1e9d-46b2-9695-ba601f4b10e4: Found 1 pods out of 1 Mar 8 16:19:27.048: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-7a06470d-1e9d-46b2-9695-ba601f4b10e4" is running Mar 8 16:19:27.051: INFO: Pod "my-hostname-basic-7a06470d-1e9d-46b2-9695-ba601f4b10e4-mk6mz" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 16:19:22 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 16:19:24 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 16:19:24 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 16:19:22 +0000 UTC Reason: Message:}]) Mar 8 16:19:27.051: INFO: Trying to dial the pod Mar 8 16:19:32.061: INFO: Controller my-hostname-basic-7a06470d-1e9d-46b2-9695-ba601f4b10e4: Got expected result from replica 1 [my-hostname-basic-7a06470d-1e9d-46b2-9695-ba601f4b10e4-mk6mz]: "my-hostname-basic-7a06470d-1e9d-46b2-9695-ba601f4b10e4-mk6mz", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:19:32.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8502" for this suite. 
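The ReplicaSet case above (one replica serving its own hostname, dialed until each replica answers with its pod name) looks roughly like this when driven by hand. A sketch under assumptions: the agnhost image tag and all names here are illustrative, not taken from the log.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: serve-hostname
        # agnhost's serve-hostname subcommand answers HTTP with the pod's hostname
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: ["serve-hostname"]
EOF
kubectl wait --for=condition=Ready pod -l app=my-hostname-basic
Dialing each ready replica and comparing the response to the pod name is the "1 of 1 required successes" the log reports.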
• [SLOW TEST:10.094 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":280,"completed":262,"skipped":4173,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:19:32.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:19:34.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2479" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":280,"completed":263,"skipped":4174,"failed":0} SSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:19:34.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap configmap-208/configmap-test-9dbdb98f-3124-4008-80e6-f9f4574ff192 STEP: Creating a pod to test consume configMaps Mar 8 16:19:34.373: INFO: Waiting up to 5m0s for pod "pod-configmaps-3ace408a-9743-4659-b81b-0108cf348bee" in namespace "configmap-208" to be "success or failure" Mar 8 16:19:34.389: INFO: Pod "pod-configmaps-3ace408a-9743-4659-b81b-0108cf348bee": Phase="Pending", Reason="", readiness=false. Elapsed: 16.113462ms Mar 8 16:19:36.393: INFO: Pod "pod-configmaps-3ace408a-9743-4659-b81b-0108cf348bee": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.019892077s STEP: Saw pod success Mar 8 16:19:36.393: INFO: Pod "pod-configmaps-3ace408a-9743-4659-b81b-0108cf348bee" satisfied condition "success or failure" Mar 8 16:19:36.395: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-3ace408a-9743-4659-b81b-0108cf348bee container env-test: STEP: delete the pod Mar 8 16:19:36.433: INFO: Waiting for pod pod-configmaps-3ace408a-9743-4659-b81b-0108cf348bee to disappear Mar 8 16:19:36.435: INFO: Pod pod-configmaps-3ace408a-9743-4659-b81b-0108cf348bee no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:19:36.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-208" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":280,"completed":264,"skipped":4183,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:19:36.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0308 16:19:46.638858 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 8 16:19:46.638: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:19:46.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1042" for this suite. 
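The garbage-collector case hinges on one rule: a dependent is collected only once all entries in its metadata.ownerReferences are gone, so the pods that were given simpletest-rc-to-stay as a second owner survive the deletion of simpletest-rc-to-be-deleted. A hand-rolled sketch of the same observation; the proxy port is an assumption, while the namespace and controller names are the ones in the log.
# List each pod with its owners; half the pods should show both RCs.
kubectl get pods -n gc-1042 -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.ownerReferences[*].name}{"\n"}{end}'
# Foreground cascading delete via the raw API (kubectl proxy assumed on :8001):
kubectl proxy --port=8001 &
curl -X DELETE -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
  http://localhost:8001/api/v1/namespaces/gc-1042/replicationcontrollers/simpletest-rc-to-be-deleted
After the delete completes, the doubly-owned pods remain, now owned solely by simpletest-rc-to-stay.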
• [SLOW TEST:10.206 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":280,"completed":265,"skipped":4199,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:19:46.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Starting the proxy Mar 8 16:19:46.680: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix157176427/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:19:46.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1149" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":280,"completed":266,"skipped":4206,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:19:46.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-map-d2db4420-fbd5-4c59-b8a8-3df1e38a0f79 STEP: Creating a pod to test consume configMaps Mar 8 16:19:46.807: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-17e3e99e-4158-4254-98c6-78942d44aa30" in namespace "projected-5333" to be "success or failure" Mar 8 16:19:46.810: INFO: Pod "pod-projected-configmaps-17e3e99e-4158-4254-98c6-78942d44aa30": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.451933ms Mar 8 16:19:48.815: INFO: Pod "pod-projected-configmaps-17e3e99e-4158-4254-98c6-78942d44aa30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008134627s Mar 8 16:19:50.818: INFO: Pod "pod-projected-configmaps-17e3e99e-4158-4254-98c6-78942d44aa30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011654145s STEP: Saw pod success Mar 8 16:19:50.818: INFO: Pod "pod-projected-configmaps-17e3e99e-4158-4254-98c6-78942d44aa30" satisfied condition "success or failure" Mar 8 16:19:50.821: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-17e3e99e-4158-4254-98c6-78942d44aa30 container projected-configmap-volume-test: STEP: delete the pod Mar 8 16:19:50.836: INFO: Waiting for pod pod-projected-configmaps-17e3e99e-4158-4254-98c6-78942d44aa30 to disappear Mar 8 16:19:50.855: INFO: Pod pod-projected-configmaps-17e3e99e-4158-4254-98c6-78942d44aa30 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:19:50.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5333" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":280,"completed":267,"skipped":4209,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:19:50.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-map-fe99c533-6e1d-4d5d-a135-e03cef7c5798 STEP: Creating a pod to test consume configMaps Mar 8 16:19:50.927: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-63dd676b-281e-45d6-9417-157131436fdc" in namespace "projected-3358" to be "success or failure" Mar 8 16:19:50.930: INFO: Pod "pod-projected-configmaps-63dd676b-281e-45d6-9417-157131436fdc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.646871ms Mar 8 16:19:52.934: INFO: Pod "pod-projected-configmaps-63dd676b-281e-45d6-9417-157131436fdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007622449s Mar 8 16:19:54.938: INFO: Pod "pod-projected-configmaps-63dd676b-281e-45d6-9417-157131436fdc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011337822s STEP: Saw pod success Mar 8 16:19:54.938: INFO: Pod "pod-projected-configmaps-63dd676b-281e-45d6-9417-157131436fdc" satisfied condition "success or failure" Mar 8 16:19:54.941: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-63dd676b-281e-45d6-9417-157131436fdc container projected-configmap-volume-test: STEP: delete the pod Mar 8 16:19:54.976: INFO: Waiting for pod pod-projected-configmaps-63dd676b-281e-45d6-9417-157131436fdc to disappear Mar 8 16:19:54.994: INFO: Pod pod-projected-configmaps-63dd676b-281e-45d6-9417-157131436fdc no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:19:54.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3358" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":268,"skipped":4217,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:19:55.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 16:19:55.551: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 16:19:58.583: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:19:58.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5172" for this suite. STEP: Destroying namespace "webhook-5172-markers" for this suite. 
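Both webhook tests in this run reduce to what gets registered through the AdmissionRegistration API. Below is a sketch of a v1 mutating registration of the shape the test sets up; failurePolicy: Fail is the same "fail closed" knob the earlier rejection test exercised. The webhook name, URL path, and CA placeholder are illustrative assumptions; the service name and namespace are the ones in the log.
cat <<'EOF' | kubectl apply -f -
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-configmap
webhooks:
- name: add-configmap-data.example.com
  clientConfig:
    service:
      name: e2e-test-webhook
      namespace: webhook-5172
      path: /mutating-configmaps
      port: 443
    # caBundle: <base64 CA for the webhook's serving cert> is required in practice
  rules:
  - operations: ["CREATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["configmaps"]
  failurePolicy: Fail           # reject matching requests when the webhook is unreachable
  sideEffects: None
  admissionReviewVersions: ["v1"]
EOF
With failurePolicy: Fail and a backend the apiserver cannot reach, every matching create is rejected unconditionally, which is exactly what the fail-closed test asserted.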
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":280,"completed":269,"skipped":4241,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:19:58.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6355.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6355.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6355.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6355.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6355.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6355.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6355.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6355.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6355.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6355.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 16:20:02.856: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:02.858: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:02.861: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:02.864: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:02.872: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:02.874: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:02.877: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:02.879: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:02.884: INFO: Lookups using dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6355.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6355.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local jessie_udp@dns-test-service-2.dns-6355.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6355.svc.cluster.local] Mar 8 16:20:07.889: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods 
dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:07.892: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:07.895: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:07.898: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:07.908: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:07.911: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:07.914: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:07.916: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:07.923: INFO: Lookups using dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6355.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6355.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local jessie_udp@dns-test-service-2.dns-6355.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6355.svc.cluster.local] Mar 8 16:20:12.888: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:12.890: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:12.893: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:12.895: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6355.svc.cluster.local from pod 
dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:12.904: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:12.906: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:12.910: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:12.912: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:12.917: INFO: Lookups using dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6355.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6355.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local jessie_udp@dns-test-service-2.dns-6355.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6355.svc.cluster.local] Mar 8 16:20:17.888: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:17.891: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:17.894: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:17.897: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:17.905: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:17.907: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods 
dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:17.911: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:17.914: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:17.920: INFO: Lookups using dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6355.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6355.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local jessie_udp@dns-test-service-2.dns-6355.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6355.svc.cluster.local] Mar 8 16:20:22.887: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:22.889: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:22.891: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:22.893: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:22.899: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:22.901: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:22.902: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:22.904: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:22.908: INFO: Lookups using dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548 failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6355.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6355.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local jessie_udp@dns-test-service-2.dns-6355.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6355.svc.cluster.local] Mar 8 16:20:27.888: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:27.891: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:27.893: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:27.895: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:27.912: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:27.914: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:27.916: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:27.918: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6355.svc.cluster.local from pod dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548: the server could not find the requested resource (get pods dns-test-3444268a-18bf-4a1c-990f-5bad310d9548) Mar 8 16:20:27.922: INFO: Lookups using dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6355.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6355.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6355.svc.cluster.local jessie_udp@dns-test-service-2.dns-6355.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6355.svc.cluster.local] Mar 8 16:20:32.922: INFO: DNS probes using dns-6355/dns-test-3444268a-18bf-4a1c-990f-5bad310d9548 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:20:33.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6355" for this suite. • [SLOW TEST:34.334 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":280,"completed":270,"skipped":4243,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:20:33.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 8 16:20:33.155: INFO: Waiting up to 5m0s for pod "pod-ff5271c5-b7bc-4591-bcc6-9672937871f6" in namespace "emptydir-602" to be "success or failure" Mar 8 16:20:33.159: INFO: Pod "pod-ff5271c5-b7bc-4591-bcc6-9672937871f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291988ms Mar 8 16:20:35.165: INFO: Pod "pod-ff5271c5-b7bc-4591-bcc6-9672937871f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010174404s Mar 8 16:20:37.168: INFO: Pod "pod-ff5271c5-b7bc-4591-bcc6-9672937871f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013308922s STEP: Saw pod success Mar 8 16:20:37.168: INFO: Pod "pod-ff5271c5-b7bc-4591-bcc6-9672937871f6" satisfied condition "success or failure" Mar 8 16:20:37.176: INFO: Trying to get logs from node latest-worker pod pod-ff5271c5-b7bc-4591-bcc6-9672937871f6 container test-container: STEP: delete the pod Mar 8 16:20:37.212: INFO: Waiting for pod pod-ff5271c5-b7bc-4591-bcc6-9672937871f6 to disappear Mar 8 16:20:37.219: INFO: Pod pod-ff5271c5-b7bc-4591-bcc6-9672937871f6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:20:37.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-602" for this suite. 
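The (root,0777,tmpfs) case comes down to mounting a memory-backed emptyDir and verifying the mode of what is written there. A minimal hand-run equivalent, assuming busybox and made-up names; the conformance test itself drives this through its mounttest utility image rather than a shell.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-check
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /test-volume; touch /test-volume/f; chmod 0777 /test-volume/f; stat -c '%a %U' /test-volume/f"]
    volumeMounts:
    - name: scratch
      mountPath: /test-volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory          # tmpfs on the node rather than node disk
EOF
kubectl logs emptydir-mode-check   # once Succeeded: the tmpfs mount line, then "777 root"
medium: Memory is what distinguishes this test from its disk-backed siblings in the same suite.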
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":271,"skipped":4252,"failed":0} ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:20:37.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 16:20:37.274: INFO: Creating deployment "webserver-deployment" Mar 8 16:20:37.279: INFO: Waiting for observed generation 1 Mar 8 16:20:39.366: INFO: Waiting for all required pods to come up Mar 8 16:20:39.369: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 8 16:20:41.377: INFO: Waiting for deployment "webserver-deployment" to complete Mar 8 16:20:41.382: INFO: Updating deployment "webserver-deployment" with a non-existent image Mar 8 16:20:41.386: INFO: Updating deployment webserver-deployment Mar 8 16:20:41.386: INFO: Waiting for observed generation 2 Mar 8 16:20:43.408: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 8 16:20:43.413: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 8 16:20:43.415: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 8 16:20:43.419: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 8 16:20:43.419: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 8 16:20:43.421: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 8 16:20:43.424: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Mar 8 16:20:43.424: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Mar 8 16:20:43.427: INFO: Updating deployment webserver-deployment Mar 8 16:20:43.427: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Mar 8 16:20:43.474: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 8 16:20:43.529: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 8 16:20:45.591: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-5455 /apis/apps/v1/namespaces/deployment-5455/deployments/webserver-deployment 99c37510-4921-4e27-a4e0-295d662c3005 32816 3 2020-03-08 16:20:37 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0004373b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-08 16:20:43 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-03-08 16:20:43 +0000 UTC,LastTransitionTime:2020-03-08 16:20:37 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Mar 8 16:20:45.620: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-5455 /apis/apps/v1/namespaces/deployment-5455/replicasets/webserver-deployment-c7997dcc8 6702a274-97a7-4b7b-9cf4-408377c1af58 32812 3 2020-03-08 16:20:41 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 99c37510-4921-4e27-a4e0-295d662c3005 0xc00212a6f7 0xc00212a6f8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00212a808 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 8 16:20:45.620: INFO: All old ReplicaSets of Deployment "webserver-deployment": Mar 8 16:20:45.620: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-5455 
/apis/apps/v1/namespaces/deployment-5455/replicasets/webserver-deployment-595b5b9587 7eaa4e54-d65b-421a-8408-3a7119ff9c5c 32800 3 2020-03-08 16:20:37 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 99c37510-4921-4e27-a4e0-295d662c3005 0xc00212a607 0xc00212a608}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00212a698 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Mar 8 16:20:45.638: INFO: Pod "webserver-deployment-595b5b9587-2mzhj" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2mzhj webserver-deployment-595b5b9587- deployment-5455 /api/v1/namespaces/deployment-5455/pods/webserver-deployment-595b5b9587-2mzhj 426b311d-d394-4778-9afd-f022d1791cb1 32634 0 2020-03-08 16:20:37 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7eaa4e54-d65b-421a-8408-3a7119ff9c5c 0xc00303f597 0xc00303f598}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-stkmk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-stkmk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-stkmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.51,StartTime:2020-03-08 16:20:37 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 16:20:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://87dd8275bf35e14e52244d95413dac29e63507bbfcbe6b555a519a3124e8a927,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.51,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 16:20:45.638: INFO: Pod "webserver-deployment-595b5b9587-2qjh6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2qjh6 webserver-deployment-595b5b9587- deployment-5455 /api/v1/namespaces/deployment-5455/pods/webserver-deployment-595b5b9587-2qjh6 b555bda8-3a84-4c7e-a604-fa6c8ad9f918 32876 0 2020-03-08 16:20:43 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7eaa4e54-d65b-421a-8408-3a7119ff9c5c 0xc00303f840 0xc00303f841}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-stkmk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-stkmk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-stkmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:
,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-08 16:20:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 16:20:45.638: INFO: Pod "webserver-deployment-595b5b9587-5pgdc" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5pgdc webserver-deployment-595b5b9587- deployment-5455 /api/v1/namespaces/deployment-5455/pods/webserver-deployment-595b5b9587-5pgdc ab28c4b7-13d4-4530-923d-7ca16a349633 32643 0 2020-03-08 16:20:37 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7eaa4e54-d65b-421a-8408-3a7119ff9c5c 0xc00303fa57 0xc00303fa58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-stkmk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-stkmk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-stkmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.54,StartTime:2020-03-08 16:20:37 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 16:20:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://54a76ab9962202fb1a2cf9548d3a03fde8a9eb52e094b0635ba7d8b1b7fedf44,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.54,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 16:20:45.638: INFO: Pod "webserver-deployment-595b5b9587-6mpr6" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6mpr6 webserver-deployment-595b5b9587- deployment-5455 /api/v1/namespaces/deployment-5455/pods/webserver-deployment-595b5b9587-6mpr6 e0f8d3d8-3407-49f5-a711-81d17a07bae5 32638 0 2020-03-08 16:20:37 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7eaa4e54-d65b-421a-8408-3a7119ff9c5c 0xc00303fd00 0xc00303fd01}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-stkmk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-stkmk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-stkmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Eff
ect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.2.126,StartTime:2020-03-08 16:20:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 16:20:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5cd48b626213dab0fb8ba147d17c2d931ea5daf0cd7b3943a6b5f04430a8e8ed,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.126,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 16:20:45.638: INFO: Pod "webserver-deployment-595b5b9587-7mrxf" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7mrxf webserver-deployment-595b5b9587- deployment-5455 /api/v1/namespaces/deployment-5455/pods/webserver-deployment-595b5b9587-7mrxf 18a46ad2-5e37-4319-9d69-a10e96fbd225 32818 0 2020-03-08 16:20:43 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7eaa4e54-d65b-421a-8408-3a7119ff9c5c 0xc00303fee7 0xc00303fee8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-stkmk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-stkmk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-stkmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-08 16:20:43 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 16:20:45.639: INFO: Pod "webserver-deployment-595b5b9587-8rzjn" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8rzjn webserver-deployment-595b5b9587- deployment-5455 /api/v1/namespaces/deployment-5455/pods/webserver-deployment-595b5b9587-8rzjn 39e5ad58-8736-4669-a73e-4f361c6cc178 32809 0 2020-03-08 16:20:43 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7eaa4e54-d65b-421a-8408-3a7119ff9c5c 0xc00309c1a7 0xc00309c1a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-stkmk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-stkmk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-stkmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,E
nableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-08 16:20:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 16:20:45.639: INFO: Pod "webserver-deployment-595b5b9587-hm8rh" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hm8rh webserver-deployment-595b5b9587- deployment-5455 /api/v1/namespaces/deployment-5455/pods/webserver-deployment-595b5b9587-hm8rh f1b7c42d-fa33-42f8-a838-4fac481e8cfe 32826 0 2020-03-08 16:20:43 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7eaa4e54-d65b-421a-8408-3a7119ff9c5c 0xc00309c3a7 0xc00309c3a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-stkmk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-stkmk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-stkmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-08 16:20:43 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 16:20:45.639: INFO: Pod "webserver-deployment-595b5b9587-k7hjz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-k7hjz webserver-deployment-595b5b9587- deployment-5455 /api/v1/namespaces/deployment-5455/pods/webserver-deployment-595b5b9587-k7hjz 4420518f-f2be-4c99-97de-f2ca2a120bbf 32854 0 2020-03-08 16:20:43 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7eaa4e54-d65b-421a-8408-3a7119ff9c5c 0xc00309c597 0xc00309c598}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-stkmk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-stkmk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-stkmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,
EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-08 16:20:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 16:20:45.639: INFO: Pod "webserver-deployment-595b5b9587-lf2zs" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lf2zs webserver-deployment-595b5b9587- deployment-5455 /api/v1/namespaces/deployment-5455/pods/webserver-deployment-595b5b9587-lf2zs f9af8262-3642-4a53-9351-e99ae3a85ca6 32810 0 2020-03-08 16:20:43 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7eaa4e54-d65b-421a-8408-3a7119ff9c5c 0xc00309c707 0xc00309c708}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-stkmk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-stkmk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-stkmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-08 16:20:43 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 16:20:45.639: INFO: Pod "webserver-deployment-595b5b9587-n5zbc" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-n5zbc webserver-deployment-595b5b9587- deployment-5455 /api/v1/namespaces/deployment-5455/pods/webserver-deployment-595b5b9587-n5zbc 7ed1ab58-0901-449c-8580-c1c0cec5d85c 32834 0 2020-03-08 16:20:43 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7eaa4e54-d65b-421a-8408-3a7119ff9c5c 0xc00309c867 0xc00309c868}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-stkmk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-stkmk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-stkmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,
…phase Pending; Ready=False since 16:20:43 (ContainersNotReady: [httpd]); host IP 172.17.0.18; no pod IP yet; container httpd Waiting (ContainerCreating), image docker.io/library/httpd:2.4.38-alpine; QoS BestEffort.
The sixteen pods reported below share one pod template: a single container named httpd (image docker.io/library/httpd:2.4.38-alpine under ReplicaSet 595b5b9587, webserver:404 under ReplicaSet c7997dcc8), the default-token-stkmk service-account volume mounted read-only at /var/run/secrets/kubernetes.io/serviceaccount, restartPolicy Always, terminationGracePeriodSeconds 0, dnsPolicy ClusterFirst, the default scheduler, the standard node.kubernetes.io/not-ready and node.kubernetes.io/unreachable NoExecute tolerations (300s each), and QoS class BestEffort. Only the per-pod fields differ:
Mar 8 16:20:45.640: INFO: Pod "webserver-deployment-595b5b9587-p9xkr" is available: uid 115f66bd-efcc-4b19-925c-93b009fbf691, created 2020-03-08 16:20:37 UTC on latest-worker2 (host IP 172.17.0.18); phase Running, Ready since 16:20:40, pod IP 10.244.2.129, container httpd running since 16:20:40 (containerd://901feb93…)
Mar 8 16:20:45.640: INFO: Pod "webserver-deployment-595b5b9587-pcpzf" is available: uid bed7f7ff-0450-432c-b3a0-9766cea58b44, created 2020-03-08 16:20:37 UTC on latest-worker (host IP 172.17.0.16); phase Running, Ready since 16:20:40, pod IP 10.244.1.52, container httpd running since 16:20:39 (containerd://39b0b54a…)
Mar 8 16:20:45.640: INFO: Pod "webserver-deployment-595b5b9587-pjlk7" is not available: uid 399919d6-e710-4fdf-9a84-6e7cc89b9ee7, created 2020-03-08 16:20:43 UTC on latest-worker (host IP 172.17.0.16); phase Pending, Ready=False since 16:20:43 (ContainersNotReady: [httpd]), no pod IP yet, container httpd Waiting (ContainerCreating)
Mar 8 16:20:45.640: INFO: Pod "webserver-deployment-595b5b9587-px685" is not available: uid 4b71410f-dc78-439c-82f1-256ca89befcc, created 2020-03-08 16:20:43 UTC on latest-worker (host IP 172.17.0.16); phase Pending, Ready=False since 16:20:43 (ContainersNotReady: [httpd]), no pod IP yet, container httpd Waiting (ContainerCreating)
Mar 8 16:20:45.640: INFO: Pod "webserver-deployment-595b5b9587-v2qds" is not available: uid 57f62c6b-17cc-4429-a1b4-1804f2377e4e, created 2020-03-08 16:20:43 UTC on latest-worker (host IP 172.17.0.16); phase Pending, Ready=False since 16:20:43 (ContainersNotReady: [httpd]), no pod IP yet, container httpd Waiting (ContainerCreating)
Mar 8 16:20:45.640: INFO: Pod "webserver-deployment-595b5b9587-v7tlp" is available: uid 1b05a36c-02ae-42cc-835b-bdc5c6aa58ae, created 2020-03-08 16:20:37 UTC on latest-worker2 (host IP 172.17.0.18); phase Running, Ready since 16:20:40, pod IP 10.244.2.127, container httpd running since 16:20:39 (containerd://d38179b2…)
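The "is available"/"is not available" verdicts above turn on each pod's Ready condition: the Running pods report Ready=True, while everything still in ContainerCreating reports Ready=False. A minimal sketch in Go of that check, using the k8s.io/api/core/v1 types these dumps are printed from (an illustrative helper, not the e2e framework's own availability function, which additionally accounts for minReadySeconds):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's Ready condition is True, the state
// the log above summarizes as "available".
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// A pod shaped like the Running entries above.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Phase: corev1.PodRunning,
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionTrue},
			},
		},
	}
	fmt.Println(isPodReady(pod)) // true
}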
Mar 8 16:20:45.641: INFO: Pod "webserver-deployment-595b5b9587-w2cjc" is not available: uid 7effc18e-0d99-40a4-bc03-c0e49c2aff40, created 2020-03-08 16:20:43 UTC on latest-worker2 (host IP 172.17.0.18); phase Pending, Ready=False since 16:20:44 (ContainersNotReady: [httpd]), no pod IP yet, container httpd Waiting (ContainerCreating)
Mar 8 16:20:45.641: INFO: Pod "webserver-deployment-595b5b9587-xmmxl" is available: uid cfb28b77-56e7-49ac-882a-21c7be484069, created 2020-03-08 16:20:37 UTC on latest-worker (host IP 172.17.0.16); phase Running, Ready since 16:20:41, pod IP 10.244.1.55, container httpd running since 16:20:40 (containerd://dc6182ac…)
Mar 8 16:20:45.641: INFO: Pod "webserver-deployment-595b5b9587-xz2mm" is available: uid 070ccd22-fbbf-4476-9d00-c64dcd276401, created 2020-03-08 16:20:37 UTC on latest-worker (host IP 172.17.0.16); phase Running, Ready since 16:20:40, pod IP 10.244.1.53, container httpd running since 16:20:39 (containerd://5e5b682e…)
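Both generations of pods are addressable through their labels (name=httpd plus the pod-template-hash each ReplicaSet stamps on its children: 595b5b9587 for the old template, c7997dcc8 for the new one). A sketch of reproducing this per-ReplicaSet listing with client-go, assuming a recent client-go (whose List takes a context) and a conventional kubeconfig path, which is an assumption of this sketch:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is assumed for illustration.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Select only the old ReplicaSet's pods by template hash.
	pods, err := clientset.CoreV1().Pods("deployment-5455").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "name=httpd,pod-template-hash=595b5b9587"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\t%s\n", p.Name, p.Spec.NodeName, p.Status.Phase)
	}
}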
Mar 8 16:20:45.641: INFO: Pod "webserver-deployment-595b5b9587-zpg2l" is not available: uid e031d332-3b22-492e-ab3d-d901868e73f0, created 2020-03-08 16:20:43 UTC on latest-worker2 (host IP 172.17.0.18); phase Pending, Ready=False since 16:20:43 (ContainersNotReady: [httpd]), no pod IP yet, container httpd Waiting (ContainerCreating)
Mar 8 16:20:45.642: INFO: Pod "webserver-deployment-c7997dcc8-2p7vh" is not available: uid 69d4f3e8-17fd-4946-9547-3becca322992, created 2020-03-08 16:20:43 UTC, scheduled to latest-worker2 (image webserver:404); phase Pending, only PodScheduled=True so far, no host/pod IP and no container statuses yet
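What distinguishes the pending pods is the container's Waiting reason: every entry that has a container status at all reports ContainerCreating, and once the kubelet actually attempts the pull, the new template's deliberately unresolvable webserver:404 image would be expected to surface as ErrImagePull or ImagePullBackOff in the same field. A sketch of extracting it, again using only the corev1 types these dumps are printed from:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// waitingReason returns the Waiting.Reason of the first container that is
// neither running nor terminated, or "" if none is waiting.
func waitingReason(status corev1.PodStatus) string {
	for _, cs := range status.ContainerStatuses {
		if cs.State.Waiting != nil {
			return cs.State.Waiting.Reason
		}
	}
	return ""
}

func main() {
	// Shaped like the pending entries above.
	st := corev1.PodStatus{
		ContainerStatuses: []corev1.ContainerStatus{
			{Name: "httpd", State: corev1.ContainerState{
				Waiting: &corev1.ContainerStateWaiting{Reason: "ContainerCreating"},
			}},
		},
	}
	fmt.Println(waitingReason(st)) // ContainerCreating
}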
Mar 8 16:20:45.642: INFO: Pod "webserver-deployment-c7997dcc8-786jj" is not available: uid 737dba59-6532-4e12-8fc0-3d1ca428d4ad, created 2020-03-08 16:20:43 UTC on latest-worker2 (host IP 172.17.0.18, image webserver:404); phase Pending, Ready=False since 16:20:43 (ContainersNotReady: [httpd]), no pod IP yet, container httpd Waiting (ContainerCreating)
Mar 8 16:20:45.642: INFO: Pod "webserver-deployment-c7997dcc8-7vdxr" is not available: uid bea71dcf-2e18-46d2-8db2-ba2c1cd06f9e, created 2020-03-08 16:20:41 UTC on latest-worker (host IP 172.17.0.16, image webserver:404); phase Pending, Ready=False since 16:20:41 (ContainersNotReady: [httpd]), no pod IP yet, container httpd Waiting (ContainerCreating)
Mar 8 16:20:45.642: INFO: Pod "webserver-deployment-c7997dcc8-8bdgq" is not available: uid aad7c18f-2f92-409c-aaa0-9c3084d7bc5c, created 2020-03-08 16:20:43 UTC, scheduled to latest-worker2 (image webserver:404); phase Pending, only PodScheduled=True so far, no host/pod IP and no container statuses yet
Mar 8 16:20:45.642: INFO: Pod "webserver-deployment-c7997dcc8-bl2v7" is not available: uid e3d92bfa-04a1-4533-8dd9-ca4f616d506b, created 2020-03-08 16:20:43 UTC on latest-worker2 (host IP 172.17.0.18, image webserver:404); phase Pending, Ready=False since 16:20:43 (ContainersNotReady: [httpd]), no pod IP yet, container httpd Waiting (ContainerCreating)
Mar 8 16:20:45.642: INFO: Pod "webserver-deployment-c7997dcc8-fv2hd" is not available: uid c416dcb5-b368-4be4-ae37-dc1b78ccdde9, created 2020-03-08 16:20:41 UTC on latest-worker2 (image webserver:404); …
ssGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-08 16:20:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 16:20:45.643: INFO: Pod "webserver-deployment-c7997dcc8-fzkkr" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fzkkr webserver-deployment-c7997dcc8- deployment-5455 /api/v1/namespaces/deployment-5455/pods/webserver-deployment-c7997dcc8-fzkkr ab41fea4-5c46-4398-9e02-783d0286c297 32819 0 2020-03-08 16:20:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6702a274-97a7-4b7b-9cf4-408377c1af58 0xc002ef4010 0xc002ef4011}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-stkmk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-stkmk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-stkmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-08 16:20:43 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 16:20:45.643: INFO: Pod "webserver-deployment-c7997dcc8-kbqq8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kbqq8 webserver-deployment-c7997dcc8- deployment-5455 /api/v1/namespaces/deployment-5455/pods/webserver-deployment-c7997dcc8-kbqq8 4348c280-4bb2-4f35-9275-8ef5021d2b1a 32882 0 2020-03-08 16:20:41 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6702a274-97a7-4b7b-9cf4-408377c1af58 0xc002ef4180 0xc002ef4181}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-stkmk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-stkmk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-stkmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:
ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.56,StartTime:2020-03-08 16:20:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.56,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 16:20:45.643: INFO: Pod "webserver-deployment-c7997dcc8-nctsl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nctsl webserver-deployment-c7997dcc8- deployment-5455 /api/v1/namespaces/deployment-5455/pods/webserver-deployment-c7997dcc8-nctsl b71cd5e9-5d8f-4e7b-a6c9-5a7459ec41c8 32743 0 2020-03-08 16:20:41 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6702a274-97a7-4b7b-9cf4-408377c1af58 0xc002ef4320 0xc002ef4321}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-stkmk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-stkmk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-stkmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.57,StartTime:2020-03-08 16:20:41 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.57,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 16:20:45.643: INFO: Pod "webserver-deployment-c7997dcc8-v4hsc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-v4hsc webserver-deployment-c7997dcc8- deployment-5455 /api/v1/namespaces/deployment-5455/pods/webserver-deployment-c7997dcc8-v4hsc 3cf96746-02fe-40b9-95dd-cd66e78cc798 32845 0 2020-03-08 16:20:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6702a274-97a7-4b7b-9cf4-408377c1af58 0xc002ef4650 0xc002ef4651}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-stkmk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-stkmk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-stkmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleratio
n{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-08 16:20:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 16:20:45.643: INFO: Pod "webserver-deployment-c7997dcc8-v58w9" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-v58w9 webserver-deployment-c7997dcc8- deployment-5455 /api/v1/namespaces/deployment-5455/pods/webserver-deployment-c7997dcc8-v58w9 c123d78e-4e5f-4130-a59e-e82a80f6feee 32692 0 2020-03-08 16:20:41 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6702a274-97a7-4b7b-9cf4-408377c1af58 0xc002ef47c0 0xc002ef47c1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-stkmk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-stkmk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-stkmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-08 16:20:41 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 16:20:45.644: INFO: Pod "webserver-deployment-c7997dcc8-wwcwv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wwcwv webserver-deployment-c7997dcc8- deployment-5455 /api/v1/namespaces/deployment-5455/pods/webserver-deployment-c7997dcc8-wwcwv 8cd7c290-b556-45a5-abc6-e77177667d64 32875 0 2020-03-08 16:20:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6702a274-97a7-4b7b-9cf4-408377c1af58 0xc002ef4930 0xc002ef4931}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-stkmk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-stkmk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-stkmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:
ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-08 16:20:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 16:20:45.644: INFO: Pod "webserver-deployment-c7997dcc8-zdsh6" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zdsh6 webserver-deployment-c7997dcc8- deployment-5455 /api/v1/namespaces/deployment-5455/pods/webserver-deployment-c7997dcc8-zdsh6 20d88d7d-0586-45ce-a0da-f0fabff7351a 32830 0 2020-03-08 16:20:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6702a274-97a7-4b7b-9cf4-408377c1af58 0xc002ef4aa0 0xc002ef4aa1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-stkmk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-stkmk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-stkmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 16:20:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-08 16:20:43 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:20:45.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5455" for this suite. • [SLOW TEST:8.425 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":280,"completed":272,"skipped":4252,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:20:45.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 8 16:20:45.788: INFO: Waiting up to 5m0s for pod "pod-83ee30f4-0af5-4b90-8d8f-4e74870d7752" in namespace "emptydir-7548" to be "success or failure" Mar 8 16:20:45.797: INFO: Pod "pod-83ee30f4-0af5-4b90-8d8f-4e74870d7752": Phase="Pending", Reason="", readiness=false. Elapsed: 9.62363ms Mar 8 16:20:47.801: INFO: Pod "pod-83ee30f4-0af5-4b90-8d8f-4e74870d7752": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013018816s Mar 8 16:20:49.804: INFO: Pod "pod-83ee30f4-0af5-4b90-8d8f-4e74870d7752": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016043679s STEP: Saw pod success Mar 8 16:20:49.804: INFO: Pod "pod-83ee30f4-0af5-4b90-8d8f-4e74870d7752" satisfied condition "success or failure" Mar 8 16:20:49.805: INFO: Trying to get logs from node latest-worker pod pod-83ee30f4-0af5-4b90-8d8f-4e74870d7752 container test-container: STEP: delete the pod Mar 8 16:20:49.848: INFO: Waiting for pod pod-83ee30f4-0af5-4b90-8d8f-4e74870d7752 to disappear Mar 8 16:20:49.875: INFO: Pod pod-83ee30f4-0af5-4b90-8d8f-4e74870d7752 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:20:49.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7548" for this suite. 
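For context, the emptyDir check that just ran boils down to: mount an emptyDir volume on the default medium, write a file with mode 0644 as a non-root user, and verify the resulting permissions. A minimal hand-run sketch follows; the pod name, the busybox image, and the inline command are illustrative stand-ins, not the mounttest image and arguments the suite actually uses:

    # Hypothetical reproduction of the (non-root,0644,default) emptyDir check;
    # the pod name and image are placeholders, not the suite's own.
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-0644-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001                  # non-root, per the [LinuxOnly] variant
      containers:
      - name: test-container
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}                     # default medium: node disk, not tmpfs
    EOF
    # Once the pod has completed, its log should show -rw-r--r--:
    kubectl logs emptydir-0644-demo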
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":273,"skipped":4292,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:20:49.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 16:20:50.280: INFO: (0) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 306.798085ms) Mar 8 16:20:50.283: INFO: (1) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.231444ms) Mar 8 16:20:50.286: INFO: (2) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.415306ms) Mar 8 16:20:50.288: INFO: (3) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.460166ms) Mar 8 16:20:50.291: INFO: (4) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.404162ms) Mar 8 16:20:50.294: INFO: (5) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.315947ms) Mar 8 16:20:50.297: INFO: (6) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.474976ms) Mar 8 16:20:50.299: INFO: (7) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.353296ms) Mar 8 16:20:50.301: INFO: (8) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.241137ms) Mar 8 16:20:50.304: INFO: (9) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.250529ms) Mar 8 16:20:50.306: INFO: (10) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.329707ms) Mar 8 16:20:50.308: INFO: (11) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.265996ms) Mar 8 16:20:50.311: INFO: (12) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.513401ms) Mar 8 16:20:50.347: INFO: (13) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 36.067352ms) Mar 8 16:20:50.356: INFO: (14) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 9.368807ms) Mar 8 16:20:50.427: INFO: (15) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 70.81069ms) Mar 8 16:20:50.430: INFO: (16) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.874145ms) Mar 8 16:20:50.433: INFO: (17) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.481821ms) Mar 8 16:20:50.435: INFO: (18) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.462501ms) Mar 8 16:20:50.437: INFO: (19) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.29883ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:20:50.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4159" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":280,"completed":274,"skipped":4318,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 16:20:50.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: executing a command with run --rm and attach with stdin Mar 8 16:20:50.553: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=kubectl-6108 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Mar 8 16:20:54.962: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0308 16:20:54.923738 3524 log.go:172] (0xc000b32370) (0xc0007cc140) Create stream\nI0308 16:20:54.923787 3524 log.go:172] (0xc000b32370) (0xc0007cc140) Stream added, broadcasting: 1\nI0308 16:20:54.925733 3524 log.go:172] (0xc000b32370) Reply frame received for 1\nI0308 16:20:54.925760 3524 log.go:172] (0xc000b32370) (0xc000b1e140) Create stream\nI0308 16:20:54.925768 3524 log.go:172] (0xc000b32370) (0xc000b1e140) Stream added, broadcasting: 3\nI0308 16:20:54.926599 3524 log.go:172] (0xc000b32370) Reply frame received for 3\nI0308 16:20:54.926627 3524 log.go:172] (0xc000b32370) (0xc0006e7b80) Create stream\nI0308 16:20:54.926638 3524 log.go:172] (0xc000b32370) (0xc0006e7b80) Stream added, broadcasting: 5\nI0308 16:20:54.927271 3524 log.go:172] (0xc000b32370) Reply frame received for 5\nI0308 16:20:54.927293 3524 log.go:172] (0xc000b32370) (0xc0007cc1e0) Create stream\nI0308 16:20:54.927300 3524 log.go:172] (0xc000b32370) (0xc0007cc1e0) Stream added, broadcasting: 7\nI0308 16:20:54.927999 3524 log.go:172] (0xc000b32370) Reply frame received for 7\nI0308 16:20:54.928139 3524 log.go:172] (0xc000b1e140) (3) Writing data frame\nI0308 16:20:54.928210 3524 log.go:172] (0xc000b1e140) (3) Writing data frame\nI0308 16:20:54.928888 3524 log.go:172] (0xc000b32370) Data frame received for 5\nI0308 16:20:54.928900 3524 log.go:172] (0xc0006e7b80) (5) Data frame handling\nI0308 16:20:54.928908 3524 log.go:172] (0xc0006e7b80) (5) Data frame sent\nI0308 16:20:54.929376 3524 log.go:172] (0xc000b32370) Data frame received for 5\nI0308 16:20:54.929395 3524 log.go:172] (0xc0006e7b80) (5) Data frame handling\nI0308 16:20:54.929409 3524 log.go:172] (0xc0006e7b80) (5) Data frame sent\nI0308 16:20:54.943610 3524 log.go:172] (0xc000b32370) Data frame received for 5\nI0308 16:20:54.943645 3524 log.go:172] (0xc0006e7b80) (5) Data frame handling\nI0308 16:20:54.943666 3524 log.go:172] (0xc000b32370) Data frame received for 7\nI0308 16:20:54.943679 3524 log.go:172] (0xc0007cc1e0) (7) Data frame handling\nI0308 16:20:54.947263 3524 log.go:172] (0xc000b32370) (0xc000b1e140) Stream removed, broadcasting: 3\nI0308 16:20:54.947313 3524 log.go:172] (0xc000b32370) Data frame received for 1\nI0308 16:20:54.947324 3524 log.go:172] (0xc0007cc140) (1) Data frame handling\nI0308 16:20:54.947336 3524 log.go:172] (0xc0007cc140) (1) Data frame sent\nI0308 16:20:54.947349 3524 log.go:172] (0xc000b32370) (0xc0007cc140) Stream removed, broadcasting: 1\nI0308 16:20:54.947364 3524 log.go:172] (0xc000b32370) Go away received\nI0308 16:20:54.947798 3524 log.go:172] (0xc000b32370) (0xc0007cc140) Stream removed, broadcasting: 1\nI0308 16:20:54.947817 3524 log.go:172] (0xc000b32370) (0xc000b1e140) Stream removed, broadcasting: 3\nI0308 16:20:54.947827 3524 log.go:172] (0xc000b32370) (0xc0006e7b80) Stream removed, broadcasting: 5\nI0308 16:20:54.947837 3524 log.go:172] (0xc000b32370) (0xc0007cc1e0) Stream removed, broadcasting: 7\n" Mar 8 16:20:54.962: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 16:20:56.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6108" for this suite. 
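The stderr blob above records that the job/v1 generator for kubectl run was already deprecated when this suite ran. For anyone replaying the step by hand on a current cluster, a rough equivalent uses kubectl create job plus an explicit delete. This is a sketch, not what the framework invokes, and it cannot reproduce the --attach/--stdin part of the test (the "abcd1234" echoed into stdin), since kubectl create job has no attach mode:

    # Approximate modern replacement for the deprecated
    # 'kubectl run --rm --generator=job/v1' invocation above.
    kubectl --namespace=kubectl-6108 create job e2e-test-rm-busybox-job \
      --image=docker.io/library/busybox:1.29 \
      -- sh -c 'cat && echo "stdin closed"'
    # There is no --rm shortcut for jobs, so clean up explicitly:
    kubectl --namespace=kubectl-6108 delete job e2e-test-rm-busybox-job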
• [SLOW TEST:6.531 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1946
    should create a job from an image, then delete the job [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":280,"completed":275,"skipped":4441,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 16:20:56.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Mar 8 16:20:57.037: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Mar 8 16:20:57.045: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 8 16:20:57.047: INFO: Number of nodes with available pods: 0
Mar 8 16:20:57.047: INFO: Node latest-worker is running more than one daemon pod
Mar 8 16:20:58.061: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 8 16:20:58.065: INFO: Number of nodes with available pods: 0
Mar 8 16:20:58.065: INFO: Node latest-worker is running more than one daemon pod
Mar 8 16:20:59.051: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 8 16:20:59.055: INFO: Number of nodes with available pods: 1
Mar 8 16:20:59.055: INFO: Node latest-worker2 is running more than one daemon pod
Mar 8 16:21:00.052: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 8 16:21:00.055: INFO: Number of nodes with available pods: 2
Mar 8 16:21:00.055: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Mar 8 16:21:00.132: INFO: Wrong image for pod: daemon-set-qh7ms. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:00.132: INFO: Wrong image for pod: daemon-set-xhnrw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:00.230: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 8 16:21:01.233: INFO: Wrong image for pod: daemon-set-qh7ms. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:01.233: INFO: Wrong image for pod: daemon-set-xhnrw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:01.236: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 8 16:21:02.233: INFO: Wrong image for pod: daemon-set-qh7ms. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:02.233: INFO: Pod daemon-set-qh7ms is not available
Mar 8 16:21:02.233: INFO: Wrong image for pod: daemon-set-xhnrw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:02.235: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 8 16:21:03.234: INFO: Wrong image for pod: daemon-set-qh7ms. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:03.234: INFO: Pod daemon-set-qh7ms is not available
Mar 8 16:21:03.234: INFO: Wrong image for pod: daemon-set-xhnrw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:03.237: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 8 16:21:04.234: INFO: Wrong image for pod: daemon-set-qh7ms. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:04.234: INFO: Pod daemon-set-qh7ms is not available
Mar 8 16:21:04.234: INFO: Wrong image for pod: daemon-set-xhnrw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:04.237: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 8 16:21:05.235: INFO: Wrong image for pod: daemon-set-qh7ms. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:05.235: INFO: Pod daemon-set-qh7ms is not available
Mar 8 16:21:05.235: INFO: Wrong image for pod: daemon-set-xhnrw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:05.238: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 8 16:21:06.307: INFO: Wrong image for pod: daemon-set-qh7ms. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:06.307: INFO: Pod daemon-set-qh7ms is not available
Mar 8 16:21:06.307: INFO: Wrong image for pod: daemon-set-xhnrw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:06.311: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 8 16:21:07.235: INFO: Wrong image for pod: daemon-set-qh7ms. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:07.235: INFO: Pod daemon-set-qh7ms is not available
Mar 8 16:21:07.235: INFO: Wrong image for pod: daemon-set-xhnrw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:07.238: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 8 16:21:08.233: INFO: Wrong image for pod: daemon-set-qh7ms. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:08.233: INFO: Pod daemon-set-qh7ms is not available
Mar 8 16:21:08.233: INFO: Wrong image for pod: daemon-set-xhnrw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:08.237: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 8 16:21:09.234: INFO: Wrong image for pod: daemon-set-qh7ms. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:09.234: INFO: Pod daemon-set-qh7ms is not available
Mar 8 16:21:09.234: INFO: Wrong image for pod: daemon-set-xhnrw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:09.238: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 8 16:21:10.233: INFO: Wrong image for pod: daemon-set-qh7ms. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:10.233: INFO: Pod daemon-set-qh7ms is not available
Mar 8 16:21:10.233: INFO: Wrong image for pod: daemon-set-xhnrw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:10.236: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 8 16:21:11.234: INFO: Wrong image for pod: daemon-set-qh7ms. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:11.234: INFO: Pod daemon-set-qh7ms is not available
Mar 8 16:21:11.234: INFO: Wrong image for pod: daemon-set-xhnrw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:11.238: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 8 16:21:12.234: INFO: Wrong image for pod: daemon-set-qh7ms. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:12.234: INFO: Pod daemon-set-qh7ms is not available
Mar 8 16:21:12.234: INFO: Wrong image for pod: daemon-set-xhnrw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:12.238: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 8 16:21:13.234: INFO: Pod daemon-set-2n4pg is not available
Mar 8 16:21:13.234: INFO: Wrong image for pod: daemon-set-xhnrw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:13.237: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 8 16:21:14.233: INFO: Pod daemon-set-2n4pg is not available
Mar 8 16:21:14.233: INFO: Wrong image for pod: daemon-set-xhnrw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:14.236: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 8 16:21:15.234: INFO: Wrong image for pod: daemon-set-xhnrw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:15.237: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 8 16:21:16.233: INFO: Wrong image for pod: daemon-set-xhnrw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 8 16:21:16.233: INFO: Pod daemon-set-xhnrw is not available
Mar 8 16:21:16.235: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 8 16:21:17.234: INFO: Pod daemon-set-rb2kr is not available
Mar 8 16:21:17.237: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Mar 8 16:21:17.240: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 8 16:21:17.243: INFO: Number of nodes with available pods: 1
Mar 8 16:21:17.243: INFO: Node latest-worker2 is running more than one daemon pod
Mar 8 16:21:18.246: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 8 16:21:18.249: INFO: Number of nodes with available pods: 1
Mar 8 16:21:18.249: INFO: Node latest-worker2 is running more than one daemon pod
Mar 8 16:21:19.266: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 8 16:21:19.269: INFO: Number of nodes with available pods: 2
Mar 8 16:21:19.269: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8074, will wait for the garbage collector to delete the pods
Mar 8 16:21:19.338: INFO: Deleting DaemonSet.extensions daemon-set took: 4.848694ms
Mar 8 16:21:19.438: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.19242ms
Mar 8 16:21:32.541: INFO: Number of nodes with available pods: 0
Mar 8 16:21:32.541: INFO: Number of running nodes: 0, number of available pods: 0
Mar 8 16:21:32.544: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8074/daemonsets","resourceVersion":"33411"},"items":null}
Mar 8 16:21:32.546: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8074/pods","resourceVersion":"33411"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 16:21:32.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8074" for this suite.
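[Editor's note] The polling above is the RollingUpdate strategy at work: the controller replaces one old daemon pod per node, waits until the replacement is available, then moves on. A minimal client-go sketch of the same image bump follows. It is an illustration, not the e2e framework's own code; it assumes client-go v0.18+ (where API calls take a context) and reuses the namespace and object name from this run.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ctx := context.TODO()
	ds, err := clientset.AppsV1().DaemonSets("daemonsets-8074").Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// With updateStrategy.type: RollingUpdate (the default), changing the pod
	// template image is enough: the controller deletes and recreates daemon
	// pods node by node, which is exactly the churn the log above records.
	ds.Spec.Template.Spec.Containers[0].Image = "gcr.io/kubernetes-e2e-test-images/agnhost:2.8"
	_, err = clientset.AppsV1().DaemonSets("daemonsets-8074").Update(ctx, ds, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("daemon set image updated; pods will roll")
}

The test's assertion is then just the loop visible in the log: list the daemon pods and wait until every container image matches the new one and per-node availability is back to the node count.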
• [SLOW TEST:35.586 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":280,"completed":276,"skipped":4504,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 16:21:32.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Mar 8 16:21:32.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Mar 8 16:21:34.349: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4628 create -f -'
Mar 8 16:21:38.720: INFO: stderr: ""
Mar 8 16:21:38.720: INFO: stdout: "e2e-test-crd-publish-openapi-6787-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Mar 8 16:21:38.720: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4628 delete e2e-test-crd-publish-openapi-6787-crds test-foo'
Mar 8 16:21:38.842: INFO: stderr: ""
Mar 8 16:21:38.842: INFO: stdout: "e2e-test-crd-publish-openapi-6787-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Mar 8 16:21:38.842: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4628 apply -f -'
Mar 8 16:21:39.119: INFO: stderr: ""
Mar 8 16:21:39.119: INFO: stdout: "e2e-test-crd-publish-openapi-6787-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Mar 8 16:21:39.119: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4628 delete e2e-test-crd-publish-openapi-6787-crds test-foo'
Mar 8 16:21:39.237: INFO: stderr: ""
Mar 8 16:21:39.237: INFO: stdout: "e2e-test-crd-publish-openapi-6787-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Mar 8 16:21:39.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4628 create -f -'
Mar 8 16:21:39.506: INFO: rc: 1
Mar 8 16:21:39.506: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4628 apply -f -'
Mar 8 16:21:39.826: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Mar 8 16:21:39.826: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4628 create -f -'
Mar 8 16:21:40.130: INFO: rc: 1
Mar 8 16:21:40.130: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4628 apply -f -'
Mar 8 16:21:40.485: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Mar 8 16:21:40.485: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6787-crds'
Mar 8 16:21:40.709: INFO: stderr: ""
Mar 8 16:21:40.709: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6787-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Mar 8 16:21:40.709: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6787-crds.metadata'
Mar 8 16:21:40.956: INFO: stderr: ""
Mar 8 16:21:40.957: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6787-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Mar 8 16:21:40.957: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6787-crds.spec'
Mar 8 16:21:41.206: INFO: stderr: ""
Mar 8 16:21:41.206: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6787-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n"
Mar 8 16:21:41.206: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6787-crds.spec.bars'
Mar 8 16:21:41.470: INFO: stderr: ""
Mar 8 16:21:41.470: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6787-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<integer>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Mar 8 16:21:41.470: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6787-crds.spec.bars2'
Mar 8 16:21:41.703: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 16:21:44.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4628" for this suite.
• [SLOW TEST:11.973 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":280,"completed":277,"skipped":4521,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 16:21:44.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Mar 8 16:21:44.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Mar 8 16:21:47.390: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2174 create -f -'
Mar 8 16:21:49.977: INFO: stderr: ""
Mar 8 16:21:49.977: INFO: stdout: "e2e-test-crd-publish-openapi-4035-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Mar 8 16:21:49.977: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2174 delete e2e-test-crd-publish-openapi-4035-crds test-cr'
Mar 8 16:21:50.114: INFO: stderr: ""
Mar 8 16:21:50.114: INFO: stdout: "e2e-test-crd-publish-openapi-4035-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Mar 8 16:21:50.114: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2174 apply -f -'
Mar 8 16:21:50.375: INFO: stderr: ""
Mar 8 16:21:50.375: INFO: stdout: "e2e-test-crd-publish-openapi-4035-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Mar 8 16:21:50.375: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2174 delete e2e-test-crd-publish-openapi-4035-crds test-cr'
Mar 8 16:21:50.465: INFO: stderr: ""
Mar 8 16:21:50.466: INFO: stdout: "e2e-test-crd-publish-openapi-4035-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Mar 8 16:21:50.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4035-crds'
Mar 8 16:21:50.698: INFO: stderr: ""
Mar 8 16:21:50.698: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4035-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Waldo\n\n status\t<Object>\n Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 16:21:53.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2174" for this suite.
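[Editor's note] Both CustomResourcePublishOpenAPI specs above hinge on the apiserver publishing the CRD's structural schema into /openapi/v2, which is what gives kubectl its client-side validation (the rc: 1 rejections) and its `kubectl explain` answers. Below is a sketch of registering a comparable "Foo" CRD with a validation schema via the apiextensions client. The group and kind (foos.example.com, Foo) are hypothetical stand-ins; the suite generates random names like e2e-test-crd-publish-openapi-6787-crd. It assumes k8s.io/apiextensions-apiserver v0.18+.

package main

import (
	"context"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := apiextensionsclientset.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// A structural schema like the test's "Foo": spec.bars is a list of
	// objects whose "name" is required. Once the CRD is established, the
	// apiserver publishes this schema, which is what lets kubectl validate
	// manifests client-side and answer `kubectl explain foos.spec.bars`.
	crd := &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type:        "object",
						Description: "Foo CRD for Testing",
						Properties: map[string]apiextensionsv1.JSONSchemaProps{
							"spec": {
								Type:        "object",
								Description: "Specification of Foo",
								Properties: map[string]apiextensionsv1.JSONSchemaProps{
									"bars": {
										Type:        "array",
										Description: "List of Bars and their specs.",
										Items: &apiextensionsv1.JSONSchemaPropsOrArray{
											Schema: &apiextensionsv1.JSONSchemaProps{
												Type:     "object",
												Required: []string{"name"},
												Properties: map[string]apiextensionsv1.JSONSchemaProps{
													"name": {Type: "string", Description: "Name of Bar."},
													"age":  {Type: "integer", Description: "Age of Bar."},
													"bazs": {
														Type:        "array",
														Description: "List of Bazs.",
														Items: &apiextensionsv1.JSONSchemaPropsOrArray{
															Schema: &apiextensionsv1.JSONSchemaProps{Type: "string"},
														},
													},
												},
											},
										},
									},
								},
							},
						},
					},
				},
			}},
		},
	}
	if _, err := client.ApiextensionsV1().CustomResourceDefinitions().Create(context.TODO(), crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

For the second spec ("preserving unknown fields in an embedded object"), the schema instead marks the nested property with the JSONSchemaProps fields XPreserveUnknownFields and XEmbeddedResource, so kubectl accepts arbitrary unknown properties there rather than rejecting them.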
• [SLOW TEST:9.115 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":280,"completed":278,"skipped":4537,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 16:21:53.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Mar 8 16:21:53.724: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7688 /api/v1/namespaces/watch-7688/configmaps/e2e-watch-test-watch-closed aac81faf-bcd3-4659-a6c1-e73335236681 33546 0 2020-03-08 16:21:53 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 8 16:21:53.724: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7688 /api/v1/namespaces/watch-7688/configmaps/e2e-watch-test-watch-closed aac81faf-bcd3-4659-a6c1-e73335236681 33547 0 2020-03-08 16:21:53 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Mar 8 16:21:53.763: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7688 /api/v1/namespaces/watch-7688/configmaps/e2e-watch-test-watch-closed aac81faf-bcd3-4659-a6c1-e73335236681 33548 0 2020-03-08 16:21:53 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 8 16:21:53.764: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7688 /api/v1/namespaces/watch-7688/configmaps/e2e-watch-test-watch-closed aac81faf-bcd3-4659-a6c1-e73335236681 33549 0 2020-03-08 16:21:53 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 16:21:53.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7688" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":280,"completed":279,"skipped":4545,"failed":0}
SSSS
------------------------------
[sig-node] Downward API
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 16:21:53.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Mar 8 16:21:53.819: INFO: Waiting up to 5m0s for pod "downward-api-fa0b56eb-25c5-40e9-a0ed-9ec65449f607" in namespace "downward-api-7833" to be "success or failure"
Mar 8 16:21:53.823: INFO: Pod "downward-api-fa0b56eb-25c5-40e9-a0ed-9ec65449f607": Phase="Pending", Reason="", readiness=false. Elapsed: 4.438023ms
Mar 8 16:21:55.840: INFO: Pod "downward-api-fa0b56eb-25c5-40e9-a0ed-9ec65449f607": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021439847s
STEP: Saw pod success
Mar 8 16:21:55.840: INFO: Pod "downward-api-fa0b56eb-25c5-40e9-a0ed-9ec65449f607" satisfied condition "success or failure"
Mar 8 16:21:55.843: INFO: Trying to get logs from node latest-worker pod downward-api-fa0b56eb-25c5-40e9-a0ed-9ec65449f607 container dapi-container: <nil>
STEP: delete the pod
Mar 8 16:21:55.876: INFO: Waiting for pod downward-api-fa0b56eb-25c5-40e9-a0ed-9ec65449f607 to disappear
Mar 8 16:21:55.883: INFO: Pod downward-api-fa0b56eb-25c5-40e9-a0ed-9ec65449f607 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 16:21:55.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7833" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":280,"completed":280,"skipped":4549,"failed":0}
SSSSSSSSSSSSSSSS
Mar 8 16:21:55.890: INFO: Running AfterSuite actions on all nodes
Mar 8 16:21:55.890: INFO: Running AfterSuite actions on node 1
Mar 8 16:21:55.890: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":280,"completed":280,"skipped":4565,"failed":0}
Ran 280 of 4845 Specs in 4343.071 seconds
SUCCESS! -- 280 Passed | 0 Failed | 0 Pending | 4565 Skipped
PASS
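[Editor's note] The Watchers spec above demonstrates the resourceVersion contract: a new watch started from the last version the previous watch delivered gets every change that happened in between replayed. A client-go sketch of the resume step (assumes client-go v0.18+; the namespace, label, and the "33547" version echo this run's log but are placeholders):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Resume from the resourceVersion of the last event the closed watch saw;
	// the apiserver replays the MODIFIED/DELETED events that happened while
	// no watch was open (here: mutation 2 and the deletion).
	lastSeen := "33547" // placeholder: take this from the previous watch's last event
	w, err := clientset.CoreV1().ConfigMaps("watch-7688").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector:   "watch-this-configmap=watch-closed-and-restarted",
		ResourceVersion: lastSeen,
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		cm, ok := ev.Object.(*corev1.ConfigMap)
		if !ok {
			continue // e.g. a watch.Error event carries a *metav1.Status instead
		}
		fmt.Printf("Got : %s %s rv=%s\n", ev.Type, cm.Name, cm.ResourceVersion)
	}
}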
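[Editor's note] The final Downward API spec checks the documented fallback: when a container declares no resource limits, resourceFieldRef-based env vars for limits.cpu and limits.memory resolve to the node's allocatable capacity. A sketch of such a pod built with client-go (pod name and namespace are illustrative; assumes client-go v0.18+):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{
							ContainerName: "dapi-container", Resource: "limits.cpu"}}},
					{Name: "MEMORY_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{
							ContainerName: "dapi-container", Resource: "limits.memory"}}},
				},
			}},
		},
	}
	// No resource limits are set on the container, so the kubelet resolves
	// limits.cpu / limits.memory to the node's allocatable values, which is
	// the behaviour the conformance spec above asserts.
	if _, err := clientset.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}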