I0308 23:36:56.036272 7 test_context.go:416] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0308 23:36:56.036454 7 e2e.go:109] Starting e2e run "eea2ee52-965b-4dce-bea2-244956469237" on Ginkgo node 1
{"msg":"Test Suite starting","total":280,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1583710614 - Will randomize all specs
Will run 280 of 4845 specs
Mar 8 23:36:56.101: INFO: >>> kubeConfig: /root/.kube/config
Mar 8 23:36:56.105: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 8 23:36:56.128: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 8 23:36:56.158: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 8 23:36:56.158: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 8 23:36:56.158: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 8 23:36:56.164: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 8 23:36:56.164: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 8 23:36:56.164: INFO: e2e test version: v1.18.0-alpha.2.152+426b3538900329
Mar 8 23:36:56.164: INFO: kube-apiserver version: v1.17.0
Mar 8 23:36:56.165: INFO: >>> kubeConfig: /root/.kube/config
Mar 8 23:36:56.169: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job
  should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 23:36:56.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Mar 8 23:36:56.237: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: executing a command with run --rm and attach with stdin
Mar 8 23:36:56.238: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=kubectl-9347 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Mar 8 23:36:59.471: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0308 23:36:59.421420 29 log.go:172] (0xc0000fe2c0) (0xc000a600a0) Create stream\nI0308 23:36:59.421466 29 log.go:172] (0xc0000fe2c0) (0xc000a600a0) Stream added, broadcasting: 1\nI0308 23:36:59.423874 29 log.go:172] (0xc0000fe2c0) Reply frame received for 1\nI0308 23:36:59.423905 29 log.go:172] (0xc0000fe2c0) (0xc0006a7b80) Create stream\nI0308 23:36:59.423917 29 log.go:172] (0xc0000fe2c0) (0xc0006a7b80) Stream added, broadcasting: 3\nI0308 23:36:59.425693 29 log.go:172] (0xc0000fe2c0) Reply frame received for 3\nI0308 23:36:59.425742 29 log.go:172] (0xc0000fe2c0) (0xc0006a7cc0) Create stream\nI0308 23:36:59.425758 29 log.go:172] (0xc0000fe2c0) (0xc0006a7cc0) Stream added, broadcasting: 5\nI0308 23:36:59.426689 29 log.go:172] (0xc0000fe2c0) Reply frame received for 5\nI0308 23:36:59.426723 29 log.go:172] (0xc0000fe2c0) (0xc0006a7d60) Create stream\nI0308 23:36:59.426732 29 log.go:172] (0xc0000fe2c0) (0xc0006a7d60) Stream added, broadcasting: 7\nI0308 23:36:59.427552 29 log.go:172] (0xc0000fe2c0) Reply frame received for 7\nI0308 23:36:59.427708 29 log.go:172] (0xc0006a7b80) (3) Writing data frame\nI0308 23:36:59.427824 29 log.go:172] (0xc0006a7b80) (3) Writing data frame\nI0308 23:36:59.428743 29 log.go:172] (0xc0000fe2c0) Data frame received for 5\nI0308 23:36:59.428760 29 log.go:172] (0xc0006a7cc0) (5) Data frame handling\nI0308 23:36:59.428777 29 log.go:172] (0xc0006a7cc0) (5) Data frame sent\nI0308 23:36:59.429069 29 log.go:172] (0xc0000fe2c0) Data frame received for 5\nI0308 23:36:59.429090 29 log.go:172] (0xc0006a7cc0) (5) Data frame handling\nI0308 23:36:59.429104 29 log.go:172] (0xc0006a7cc0) (5) Data frame sent\nI0308 23:36:59.447407 29 log.go:172] (0xc0000fe2c0) Data frame received for 5\nI0308 23:36:59.447438 29 log.go:172] (0xc0006a7cc0) (5) Data frame handling\nI0308 23:36:59.447458 29 log.go:172] (0xc0000fe2c0) Data frame received for 7\nI0308 23:36:59.447468 29 log.go:172] (0xc0006a7d60) (7) Data frame handling\nI0308 23:36:59.447728 29 log.go:172] (0xc0000fe2c0) Data frame received for 1\nI0308 23:36:59.447744 29 log.go:172] (0xc000a600a0) (1) Data frame handling\nI0308 23:36:59.447761 29 log.go:172] (0xc000a600a0) (1) Data frame sent\nI0308 23:36:59.447813 29 log.go:172] (0xc0000fe2c0) (0xc000a600a0) Stream removed, broadcasting: 1\nI0308 23:36:59.447936 29 log.go:172] (0xc0000fe2c0) (0xc0006a7b80) Stream removed, broadcasting: 3\nI0308 23:36:59.448060 29 log.go:172] (0xc0000fe2c0) (0xc000a600a0) Stream removed, broadcasting: 1\nI0308 23:36:59.448077 29 log.go:172] (0xc0000fe2c0) (0xc0006a7b80) Stream removed, broadcasting: 3\nI0308 23:36:59.448083 29 log.go:172] (0xc0000fe2c0) (0xc0006a7cc0) Stream removed, broadcasting: 5\nI0308 23:36:59.448092 29 log.go:172] (0xc0000fe2c0) (0xc0006a7d60) Stream removed, broadcasting: 7\nI0308 23:36:59.448191 29 log.go:172] (0xc0000fe2c0) Go away received\n"
Mar 8 23:36:59.471: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 23:37:01.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9347" for this suite.
• [SLOW TEST:5.316 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1946
    should create a job from an image, then delete the job [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":280,"completed":1,"skipped":29,"failed":0}
[sig-api-machinery] Secrets
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 23:37:01.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating secret secrets-9130/secret-test-d309e6e6-02cd-4df4-81c7-c0a464986995
STEP: Creating a pod to test consume secrets
Mar 8 23:37:01.567: INFO: Waiting up to 5m0s for pod "pod-configmaps-477db897-a12f-47fa-8abb-2c54d134401c" in namespace "secrets-9130" to be "success or failure"
Mar 8 23:37:01.580: INFO: Pod "pod-configmaps-477db897-a12f-47fa-8abb-2c54d134401c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.543207ms
Mar 8 23:37:03.584: INFO: Pod "pod-configmaps-477db897-a12f-47fa-8abb-2c54d134401c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.016435289s
STEP: Saw pod success
Mar 8 23:37:03.584: INFO: Pod "pod-configmaps-477db897-a12f-47fa-8abb-2c54d134401c" satisfied condition "success or failure"
Mar 8 23:37:03.587: INFO: Trying to get logs from node latest-worker pod pod-configmaps-477db897-a12f-47fa-8abb-2c54d134401c container env-test: <nil>
STEP: delete the pod
Mar 8 23:37:03.617: INFO: Waiting for pod pod-configmaps-477db897-a12f-47fa-8abb-2c54d134401c to disappear
Mar 8 23:37:03.621: INFO: Pod pod-configmaps-477db897-a12f-47fa-8abb-2c54d134401c no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 23:37:03.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9130" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":280,"completed":2,"skipped":29,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:37:03.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-492fe71b-8c69-49c5-bfbb-d978969cc3db STEP: Creating a pod to test consume secrets Mar 8 23:37:03.719: INFO: Waiting up to 5m0s for pod "pod-secrets-630f177c-a6c1-4bbd-9ce1-16f9b0d8e713" in namespace "secrets-3946" to be "success or failure" Mar 8 23:37:03.737: INFO: Pod "pod-secrets-630f177c-a6c1-4bbd-9ce1-16f9b0d8e713": Phase="Pending", Reason="", readiness=false. Elapsed: 17.767763ms Mar 8 23:37:05.742: INFO: Pod "pod-secrets-630f177c-a6c1-4bbd-9ce1-16f9b0d8e713": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.022260088s STEP: Saw pod success Mar 8 23:37:05.742: INFO: Pod "pod-secrets-630f177c-a6c1-4bbd-9ce1-16f9b0d8e713" satisfied condition "success or failure" Mar 8 23:37:05.745: INFO: Trying to get logs from node latest-worker pod pod-secrets-630f177c-a6c1-4bbd-9ce1-16f9b0d8e713 container secret-volume-test: STEP: delete the pod Mar 8 23:37:05.773: INFO: Waiting for pod pod-secrets-630f177c-a6c1-4bbd-9ce1-16f9b0d8e713 to disappear Mar 8 23:37:05.777: INFO: Pod pod-secrets-630f177c-a6c1-4bbd-9ce1-16f9b0d8e713 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:37:05.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3946" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":280,"completed":3,"skipped":32,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:37:05.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 8 23:37:05.875: INFO: Waiting up to 5m0s for pod "pod-028aae94-548a-4953-bfe3-9d3a26f28114" in namespace "emptydir-2118" to be "success or failure" Mar 8 23:37:05.883: INFO: Pod "pod-028aae94-548a-4953-bfe3-9d3a26f28114": Phase="Pending", Reason="", readiness=false. Elapsed: 7.79611ms Mar 8 23:37:07.887: INFO: Pod "pod-028aae94-548a-4953-bfe3-9d3a26f28114": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011796516s STEP: Saw pod success Mar 8 23:37:07.887: INFO: Pod "pod-028aae94-548a-4953-bfe3-9d3a26f28114" satisfied condition "success or failure" Mar 8 23:37:07.890: INFO: Trying to get logs from node latest-worker pod pod-028aae94-548a-4953-bfe3-9d3a26f28114 container test-container: STEP: delete the pod Mar 8 23:37:07.910: INFO: Waiting for pod pod-028aae94-548a-4953-bfe3-9d3a26f28114 to disappear Mar 8 23:37:07.915: INFO: Pod pod-028aae94-548a-4953-bfe3-9d3a26f28114 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:37:07.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2118" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":4,"skipped":48,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:37:07.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test override all Mar 8 23:37:08.004: INFO: Waiting up to 5m0s for pod "client-containers-5b6e24a7-6708-4cc1-80ee-8838047431bb" in namespace "containers-242" to be "success or failure" Mar 8 23:37:08.031: INFO: Pod "client-containers-5b6e24a7-6708-4cc1-80ee-8838047431bb": Phase="Pending", Reason="", readiness=false. Elapsed: 27.251592ms Mar 8 23:37:10.035: INFO: Pod "client-containers-5b6e24a7-6708-4cc1-80ee-8838047431bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.030770717s STEP: Saw pod success Mar 8 23:37:10.035: INFO: Pod "client-containers-5b6e24a7-6708-4cc1-80ee-8838047431bb" satisfied condition "success or failure" Mar 8 23:37:10.038: INFO: Trying to get logs from node latest-worker pod client-containers-5b6e24a7-6708-4cc1-80ee-8838047431bb container test-container: STEP: delete the pod Mar 8 23:37:10.059: INFO: Waiting for pod client-containers-5b6e24a7-6708-4cc1-80ee-8838047431bb to disappear Mar 8 23:37:10.064: INFO: Pod client-containers-5b6e24a7-6708-4cc1-80ee-8838047431bb no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:37:10.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-242" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":280,"completed":5,"skipped":76,"failed":0} ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:37:10.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 8 23:37:10.156: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-6922' Mar 8 23:37:10.274: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 8 23:37:10.274: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Mar 8 23:37:10.316: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-5ltj4] Mar 8 23:37:10.316: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-5ltj4" in namespace "kubectl-6922" to be "running and ready" Mar 8 23:37:10.319: INFO: Pod "e2e-test-httpd-rc-5ltj4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.962117ms Mar 8 23:37:12.323: INFO: Pod "e2e-test-httpd-rc-5ltj4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00679053s Mar 8 23:37:14.327: INFO: Pod "e2e-test-httpd-rc-5ltj4": Phase="Running", Reason="", readiness=true. Elapsed: 4.010893422s Mar 8 23:37:14.327: INFO: Pod "e2e-test-httpd-rc-5ltj4" satisfied condition "running and ready" Mar 8 23:37:14.327: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-5ltj4] Mar 8 23:37:14.328: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-6922' Mar 8 23:37:14.481: INFO: stderr: "" Mar 8 23:37:14.481: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.192. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.192. 
Set the 'ServerName' directive globally to suppress this message\n[Sun Mar 08 23:37:11.513146 2020] [mpm_event:notice] [pid 1:tid 140550002244456] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sun Mar 08 23:37:11.513203 2020] [core:notice] [pid 1:tid 140550002244456] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1639 Mar 8 23:37:14.481: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-6922' Mar 8 23:37:14.640: INFO: stderr: "" Mar 8 23:37:14.640: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:37:14.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6922" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":280,"completed":6,"skipped":76,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:37:14.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 23:37:14.703: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 8 23:37:16.752: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:37:17.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9690" for this suite. 
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":280,"completed":7,"skipped":87,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:37:17.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 23:37:18.905: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 8 23:37:18.928: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 8 23:37:23.933: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 8 23:37:23.933: INFO: Creating deployment "test-rolling-update-deployment" Mar 8 23:37:23.937: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 8 23:37:23.948: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 8 23:37:25.954: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 8 23:37:25.956: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 8 23:37:25.964: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-6001 /apis/apps/v1/namespaces/deployment-6001/deployments/test-rolling-update-deployment 308a6654-d3d6-4306-bef9-341a02e4abe4 127637 1 2020-03-08 23:37:23 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e7f728 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-08 23:37:23 +0000 UTC,LastTransitionTime:2020-03-08 23:37:23 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-03-08 23:37:25 +0000 UTC,LastTransitionTime:2020-03-08 23:37:23 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 8 23:37:25.967: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-6001 /apis/apps/v1/namespaces/deployment-6001/replicasets/test-rolling-update-deployment-67cf4f6444 dd6fc838-d5d9-4c4e-a3b2-bad1f60871ac 127626 1 2020-03-08 23:37:23 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 308a6654-d3d6-4306-bef9-341a02e4abe4 0xc002e7fba7 0xc002e7fba8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e7fc18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 8 23:37:25.967: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 8 23:37:25.967: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-6001 /apis/apps/v1/namespaces/deployment-6001/replicasets/test-rolling-update-controller 343be849-efa2-4b95-ae63-e7686992ac14 127635 2 2020-03-08 23:37:18 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 308a6654-d3d6-4306-bef9-341a02e4abe4 0xc002e7fad7 0xc002e7fad8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002e7fb38 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 8 23:37:25.970: INFO: Pod "test-rolling-update-deployment-67cf4f6444-kn685" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-kn685 test-rolling-update-deployment-67cf4f6444- deployment-6001 /api/v1/namespaces/deployment-6001/pods/test-rolling-update-deployment-67cf4f6444-kn685 b2162416-96ca-4ec6-a648-c7303c5b012a 127625 0 2020-03-08 23:37:23 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 dd6fc838-d5d9-4c4e-a3b2-bad1f60871ac 0xc00293fbb7 0xc00293fbb8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-grj2z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-grj2z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-grj2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecut
e,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:37:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:37:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:37:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:37:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.194,StartTime:2020-03-08 23:37:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 23:37:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://9033629514ab6a41be9db6bcdced8251ffb26bc614998bd85fedbbc92e2ee0db,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.194,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:37:25.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6001" for this suite. 
• [SLOW TEST:8.205 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":280,"completed":8,"skipped":98,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 23:37:25.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service endpoint-test2 in namespace services-2554
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2554 to expose endpoints map[]
Mar 8 23:37:26.084: INFO: Get endpoints failed (3.692471ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Mar 8 23:37:27.088: INFO: successfully validated that service endpoint-test2 in namespace services-2554 exposes endpoints map[] (1.007736658s elapsed)
STEP: Creating pod pod1 in namespace services-2554
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2554 to expose endpoints map[pod1:[80]]
Mar 8 23:37:29.179: INFO: successfully validated that service endpoint-test2 in namespace services-2554 exposes endpoints map[pod1:[80]] (2.083805773s elapsed)
STEP: Creating pod pod2 in namespace services-2554
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2554 to expose endpoints map[pod1:[80] pod2:[80]]
Mar 8 23:37:32.253: INFO: successfully validated that service endpoint-test2 in namespace services-2554 exposes endpoints map[pod1:[80] pod2:[80]] (3.069637694s elapsed)
STEP: Deleting pod pod1 in namespace services-2554
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2554 to expose endpoints map[pod2:[80]]
Mar 8 23:37:33.292: INFO: successfully validated that service endpoint-test2 in namespace services-2554 exposes endpoints map[pod2:[80]] (1.035626355s elapsed)
STEP: Deleting pod pod2 in namespace services-2554
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2554 to expose endpoints map[]
Mar 8 23:37:33.305: INFO: successfully validated that service endpoint-test2 in namespace services-2554 exposes endpoints map[] (9.566878ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 23:37:33.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2554" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:7.391 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":280,"completed":9,"skipped":115,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 23:37:33.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 23:37:49.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9017" for this suite.
• [SLOW TEST:16.092 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":280,"completed":10,"skipped":124,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 23:37:49.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-map-74e30e85-f78e-490e-8fc3-e669889b2d4f
STEP: Creating a pod to test consume secrets
Mar 8 23:37:49.691: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-087871a2-a7a6-4b8a-a41a-f3b844cd03eb" in namespace "projected-2103" to be "success or failure"
Mar 8 23:37:49.700: INFO: Pod "pod-projected-secrets-087871a2-a7a6-4b8a-a41a-f3b844cd03eb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.250155ms
Mar 8 23:37:51.709: INFO: Pod "pod-projected-secrets-087871a2-a7a6-4b8a-a41a-f3b844cd03eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018397111s
STEP: Saw pod success
Mar 8 23:37:51.709: INFO: Pod "pod-projected-secrets-087871a2-a7a6-4b8a-a41a-f3b844cd03eb" satisfied condition "success or failure"
Mar 8 23:37:51.712: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-087871a2-a7a6-4b8a-a41a-f3b844cd03eb container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 8 23:37:51.763: INFO: Waiting for pod pod-projected-secrets-087871a2-a7a6-4b8a-a41a-f3b844cd03eb to disappear
Mar 8 23:37:51.772: INFO: Pod pod-projected-secrets-087871a2-a7a6-4b8a-a41a-f3b844cd03eb no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 23:37:51.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2103" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":11,"skipped":135,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:37:51.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Mar 8 23:37:51.924: INFO: Waiting up to 5m0s for pod "downward-api-d9d9c5da-8338-4d4e-8e3a-0a90a183f7e4" in namespace "downward-api-6632" to be "success or failure" Mar 8 23:37:51.940: INFO: Pod "downward-api-d9d9c5da-8338-4d4e-8e3a-0a90a183f7e4": Phase="Pending", Reason="", readiness=false. Elapsed: 15.602696ms Mar 8 23:37:53.943: INFO: Pod "downward-api-d9d9c5da-8338-4d4e-8e3a-0a90a183f7e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01897336s Mar 8 23:37:55.947: INFO: Pod "downward-api-d9d9c5da-8338-4d4e-8e3a-0a90a183f7e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022643971s STEP: Saw pod success Mar 8 23:37:55.947: INFO: Pod "downward-api-d9d9c5da-8338-4d4e-8e3a-0a90a183f7e4" satisfied condition "success or failure" Mar 8 23:37:55.950: INFO: Trying to get logs from node latest-worker pod downward-api-d9d9c5da-8338-4d4e-8e3a-0a90a183f7e4 container dapi-container: STEP: delete the pod Mar 8 23:37:55.972: INFO: Waiting for pod downward-api-d9d9c5da-8338-4d4e-8e3a-0a90a183f7e4 to disappear Mar 8 23:37:56.009: INFO: Pod downward-api-d9d9c5da-8338-4d4e-8e3a-0a90a183f7e4 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:37:56.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6632" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":280,"completed":12,"skipped":155,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:37:56.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 23:37:56.724: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 23:37:59.782: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Mar 8 23:37:59.803: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:37:59.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1500" for this suite. STEP: Destroying namespace "webhook-1500-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":280,"completed":13,"skipped":164,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 8 23:38:00.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Mar 8 23:38:00.238: INFO: Waiting up to 5m0s for pod "downwardapi-volume-062e3b22-fb0f-479d-93fd-b9f20f1e7a7d" in namespace "projected-8308" to be "success or failure"
Mar 8 23:38:00.245: INFO: Pod "downwardapi-volume-062e3b22-fb0f-479d-93fd-b9f20f1e7a7d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.119107ms
Mar 8 23:38:02.249: INFO: Pod "downwardapi-volume-062e3b22-fb0f-479d-93fd-b9f20f1e7a7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010662101s
STEP: Saw pod success
Mar 8 23:38:02.249: INFO: Pod "downwardapi-volume-062e3b22-fb0f-479d-93fd-b9f20f1e7a7d" satisfied condition "success or failure"
Mar 8 23:38:02.252: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-062e3b22-fb0f-479d-93fd-b9f20f1e7a7d container client-container: <nil>
STEP: delete the pod
Mar 8 23:38:02.294: INFO: Waiting for pod downwardapi-volume-062e3b22-fb0f-479d-93fd-b9f20f1e7a7d to disappear
Mar 8 23:38:02.299: INFO: Pod downwardapi-volume-062e3b22-fb0f-479d-93fd-b9f20f1e7a7d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 8 23:38:02.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8308" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":14,"skipped":175,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:38:02.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: set up a multi version CRD Mar 8 23:38:02.382: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:38:19.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3000" for this suite. • [SLOW TEST:16.981 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":280,"completed":15,"skipped":197,"failed":0} SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:38:19.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-4059 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 8 23:38:19.348: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 8 23:38:19.398: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 8 23:38:21.402: INFO: The status of Pod netserver-0 is Pending, waiting for it to 
be Running (with Ready = true) Mar 8 23:38:23.402: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 23:38:25.435: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 23:38:27.429: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 23:38:29.402: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 23:38:31.408: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 23:38:33.402: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 23:38:35.402: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 23:38:37.402: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 23:38:39.405: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 8 23:38:39.413: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 8 23:38:41.418: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 8 23:38:45.471: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.200 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4059 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 23:38:45.471: INFO: >>> kubeConfig: /root/.kube/config I0308 23:38:45.509684 7 log.go:172] (0xc00409a2c0) (0xc0028fd400) Create stream I0308 23:38:45.509716 7 log.go:172] (0xc00409a2c0) (0xc0028fd400) Stream added, broadcasting: 1 I0308 23:38:45.515865 7 log.go:172] (0xc00409a2c0) Reply frame received for 1 I0308 23:38:45.515905 7 log.go:172] (0xc00409a2c0) (0xc0023e2000) Create stream I0308 23:38:45.515917 7 log.go:172] (0xc00409a2c0) (0xc0023e2000) Stream added, broadcasting: 3 I0308 23:38:45.517004 7 log.go:172] (0xc00409a2c0) Reply frame received for 3 I0308 23:38:45.517043 7 log.go:172] (0xc00409a2c0) (0xc0023e20a0) Create stream I0308 23:38:45.517067 7 log.go:172] (0xc00409a2c0) (0xc0023e20a0) Stream added, broadcasting: 5 I0308 23:38:45.518000 7 log.go:172] (0xc00409a2c0) Reply frame received for 5 I0308 23:38:46.571172 7 log.go:172] (0xc00409a2c0) Data frame received for 3 I0308 23:38:46.571200 7 log.go:172] (0xc0023e2000) (3) Data frame handling I0308 23:38:46.571238 7 log.go:172] (0xc0023e2000) (3) Data frame sent I0308 23:38:46.571247 7 log.go:172] (0xc00409a2c0) Data frame received for 3 I0308 23:38:46.571254 7 log.go:172] (0xc0023e2000) (3) Data frame handling I0308 23:38:46.571431 7 log.go:172] (0xc00409a2c0) Data frame received for 5 I0308 23:38:46.571453 7 log.go:172] (0xc0023e20a0) (5) Data frame handling I0308 23:38:46.573922 7 log.go:172] (0xc00409a2c0) Data frame received for 1 I0308 23:38:46.573966 7 log.go:172] (0xc0028fd400) (1) Data frame handling I0308 23:38:46.573998 7 log.go:172] (0xc0028fd400) (1) Data frame sent I0308 23:38:46.574025 7 log.go:172] (0xc00409a2c0) (0xc0028fd400) Stream removed, broadcasting: 1 I0308 23:38:46.574054 7 log.go:172] (0xc00409a2c0) Go away received I0308 23:38:46.574487 7 log.go:172] (0xc00409a2c0) (0xc0028fd400) Stream removed, broadcasting: 1 I0308 23:38:46.574509 7 log.go:172] (0xc00409a2c0) (0xc0023e2000) Stream removed, broadcasting: 3 I0308 23:38:46.574522 7 log.go:172] (0xc00409a2c0) (0xc0023e20a0) Stream removed, broadcasting: 5 Mar 8 23:38:46.574: INFO: Found all expected endpoints: [netserver-0] Mar 8 23:38:46.577: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.32 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4059 PodName:host-test-container-pod 
ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 23:38:46.577: INFO: >>> kubeConfig: /root/.kube/config I0308 23:38:46.605192 7 log.go:172] (0xc00409a840) (0xc0028fd5e0) Create stream I0308 23:38:46.605220 7 log.go:172] (0xc00409a840) (0xc0028fd5e0) Stream added, broadcasting: 1 I0308 23:38:46.607926 7 log.go:172] (0xc00409a840) Reply frame received for 1 I0308 23:38:46.607978 7 log.go:172] (0xc00409a840) (0xc0028fd680) Create stream I0308 23:38:46.607996 7 log.go:172] (0xc00409a840) (0xc0028fd680) Stream added, broadcasting: 3 I0308 23:38:46.609066 7 log.go:172] (0xc00409a840) Reply frame received for 3 I0308 23:38:46.609099 7 log.go:172] (0xc00409a840) (0xc002416d20) Create stream I0308 23:38:46.609112 7 log.go:172] (0xc00409a840) (0xc002416d20) Stream added, broadcasting: 5 I0308 23:38:46.609967 7 log.go:172] (0xc00409a840) Reply frame received for 5 I0308 23:38:47.665101 7 log.go:172] (0xc00409a840) Data frame received for 3 I0308 23:38:47.665129 7 log.go:172] (0xc0028fd680) (3) Data frame handling I0308 23:38:47.665143 7 log.go:172] (0xc0028fd680) (3) Data frame sent I0308 23:38:47.665153 7 log.go:172] (0xc00409a840) Data frame received for 3 I0308 23:38:47.665170 7 log.go:172] (0xc0028fd680) (3) Data frame handling I0308 23:38:47.665764 7 log.go:172] (0xc00409a840) Data frame received for 5 I0308 23:38:47.665786 7 log.go:172] (0xc002416d20) (5) Data frame handling I0308 23:38:47.667724 7 log.go:172] (0xc00409a840) Data frame received for 1 I0308 23:38:47.667760 7 log.go:172] (0xc0028fd5e0) (1) Data frame handling I0308 23:38:47.667777 7 log.go:172] (0xc0028fd5e0) (1) Data frame sent I0308 23:38:47.667792 7 log.go:172] (0xc00409a840) (0xc0028fd5e0) Stream removed, broadcasting: 1 I0308 23:38:47.667835 7 log.go:172] (0xc00409a840) Go away received I0308 23:38:47.667948 7 log.go:172] (0xc00409a840) (0xc0028fd5e0) Stream removed, broadcasting: 1 I0308 23:38:47.667975 7 log.go:172] (0xc00409a840) (0xc0028fd680) Stream removed, broadcasting: 3 I0308 23:38:47.667988 7 log.go:172] (0xc00409a840) (0xc002416d20) Stream removed, broadcasting: 5 Mar 8 23:38:47.668: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:38:47.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4059" for this suite. 
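The node-to-pod UDP check above is driven entirely by those ExecWithOptions calls: the framework execs into the host-network test pod and fires a single datagram at each netserver's UDP port 8081 with netcat, treating any non-empty reply as success. Run by hand, the same probe is roughly the following sketch (the namespace, pod name, and target IP are copied from this run's log and will differ on another run):

# Fire one UDP datagram at a netserver pod and print any reply
kubectl exec -n pod-network-test-4059 host-test-container-pod -c agnhost -- \
  /bin/sh -c 'echo hostName | nc -w 1 -u 10.244.1.200 8081 | grep -v "^\s*$"'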
• [SLOW TEST:28.389 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":16,"skipped":199,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:38:47.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 8 23:38:47.774: INFO: Waiting up to 5m0s for pod "pod-21815e5a-13f9-4640-ac11-7718c50a2b7c" in namespace "emptydir-5058" to be "success or failure" Mar 8 23:38:47.780: INFO: Pod "pod-21815e5a-13f9-4640-ac11-7718c50a2b7c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033496ms Mar 8 23:38:49.783: INFO: Pod "pod-21815e5a-13f9-4640-ac11-7718c50a2b7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009655493s STEP: Saw pod success Mar 8 23:38:49.783: INFO: Pod "pod-21815e5a-13f9-4640-ac11-7718c50a2b7c" satisfied condition "success or failure" Mar 8 23:38:49.786: INFO: Trying to get logs from node latest-worker pod pod-21815e5a-13f9-4640-ac11-7718c50a2b7c container test-container: STEP: delete the pod Mar 8 23:38:49.817: INFO: Waiting for pod pod-21815e5a-13f9-4640-ac11-7718c50a2b7c to disappear Mar 8 23:38:49.821: INFO: Pod pod-21815e5a-13f9-4640-ac11-7718c50a2b7c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:38:49.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5058" for this suite. 
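The (root,0777,tmpfs) case above boils down to a one-shot pod that mounts a memory-backed emptyDir and checks the permissions of a file created inside it. A loose hand-written analogue, assuming only kubectl and a schedulable cluster (the pod and volume names here are illustrative, not the test's generated ones):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-check
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /mnt/volume/f && chmod 0777 /mnt/volume/f && stat -c '%a' /mnt/volume/f"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/volume
  volumes:
  - name: vol
    emptyDir:
      medium: Memory   # tmpfs-backed, as in the test name
EOF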
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":17,"skipped":224,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:38:49.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 23:38:50.023: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"ce5bd3f9-8f79-4df9-af6d-6dbf9669b3cb", Controller:(*bool)(0xc0031f2a3a), BlockOwnerDeletion:(*bool)(0xc0031f2a3b)}} Mar 8 23:38:50.032: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"db1942ac-5f67-4cb5-a59d-42c3c808aa61", Controller:(*bool)(0xc0031be2ba), BlockOwnerDeletion:(*bool)(0xc0031be2bb)}} Mar 8 23:38:50.058: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"b511c348-9684-4b2f-b141-91c4de624a7e", Controller:(*bool)(0xc0031a1b8a), BlockOwnerDeletion:(*bool)(0xc0031a1b8b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:38:55.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5711" for this suite. 
• [SLOW TEST:5.271 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":280,"completed":18,"skipped":227,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:38:55.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating all guestbook components Mar 8 23:38:55.157: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Mar 8 23:38:55.157: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8127' Mar 8 23:38:55.493: INFO: stderr: "" Mar 8 23:38:55.493: INFO: stdout: "service/agnhost-slave created\n" Mar 8 23:38:55.493: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Mar 8 23:38:55.493: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8127' Mar 8 23:38:55.764: INFO: stderr: "" Mar 8 23:38:55.764: INFO: stdout: "service/agnhost-master created\n" Mar 8 23:38:55.764: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Mar 8 23:38:55.764: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8127' Mar 8 23:38:56.040: INFO: stderr: "" Mar 8 23:38:56.040: INFO: stdout: "service/frontend created\n" Mar 8 23:38:56.040: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Mar 8 23:38:56.040: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8127' Mar 8 23:38:56.263: INFO: stderr: "" Mar 8 23:38:56.263: INFO: stdout: "deployment.apps/frontend created\n" Mar 8 23:38:56.263: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 8 23:38:56.264: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8127' Mar 8 23:38:56.524: INFO: stderr: "" Mar 8 23:38:56.524: INFO: stdout: "deployment.apps/agnhost-master created\n" Mar 8 23:38:56.524: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 8 23:38:56.524: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8127' Mar 8 23:38:56.749: INFO: stderr: "" Mar 8 23:38:56.749: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Mar 8 23:38:56.749: INFO: Waiting for all frontend pods to be Running. Mar 8 23:39:01.799: INFO: Waiting for frontend to serve content. Mar 8 23:39:01.811: INFO: Trying to add a new entry to the guestbook. Mar 8 23:39:01.821: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 8 23:39:01.829: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8127' Mar 8 23:39:01.983: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 8 23:39:01.983: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Mar 8 23:39:01.983: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8127' Mar 8 23:39:02.114: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 23:39:02.114: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 8 23:39:02.114: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8127' Mar 8 23:39:02.196: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 23:39:02.196: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 8 23:39:02.196: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8127' Mar 8 23:39:02.262: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 23:39:02.262: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 8 23:39:02.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8127' Mar 8 23:39:02.339: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 23:39:02.339: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 8 23:39:02.339: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8127' Mar 8 23:39:02.417: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 23:39:02.417: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:39:02.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8127" for this suite. 
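Each cleanup step above pipes the original manifest back through kubectl delete with --grace-period=0 --force, which is why every call prints the "does not wait for confirmation" warning: force deletion removes the API object immediately without waiting for the kubelet to confirm termination. The same cleanup issued directly by name would look like this (a sketch using the resource names from the log):

kubectl delete service agnhost-slave agnhost-master frontend \
  -n kubectl-8127 --grace-period=0 --force
kubectl delete deployment frontend agnhost-master agnhost-slave \
  -n kubectl-8127 --grace-period=0 --force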
• [SLOW TEST:7.323 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:388 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":280,"completed":19,"skipped":235,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:39:02.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics W0308 23:39:08.494768 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 8 23:39:08.494: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:39:08.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7101" for this suite. 
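The deleteOptions the test name refers to is propagationPolicy: Foreground. The RC is deleted with foreground cascading, so the API server parks it behind the foregroundDeletion finalizer and the object stays visible until the garbage collector has removed every dependent pod. Sent by hand against the raw API, that is roughly (a sketch; the RC name is illustrative, and kubectl proxy on port 8001 is assumed):

kubectl proxy --port=8001 &
curl -X DELETE \
  -H "Content-Type: application/json" \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
  http://127.0.0.1:8001/api/v1/namespaces/gc-7101/replicationcontrollers/simpletest.rc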
• [SLOW TEST:6.079 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":280,"completed":20,"skipped":236,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:39:08.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 8 23:39:13.088: INFO: Successfully updated pod "pod-update-activedeadlineseconds-bd1b2d83-de4b-4940-abe4-cfed8bb4beef" Mar 8 23:39:13.088: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-bd1b2d83-de4b-4940-abe4-cfed8bb4beef" in namespace "pods-2771" to be "terminated due to deadline exceeded" Mar 8 23:39:13.124: INFO: Pod "pod-update-activedeadlineseconds-bd1b2d83-de4b-4940-abe4-cfed8bb4beef": Phase="Running", Reason="", readiness=true. Elapsed: 35.857796ms Mar 8 23:39:15.127: INFO: Pod "pod-update-activedeadlineseconds-bd1b2d83-de4b-4940-abe4-cfed8bb4beef": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.039477506s Mar 8 23:39:15.127: INFO: Pod "pod-update-activedeadlineseconds-bd1b2d83-de4b-4940-abe4-cfed8bb4beef" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:39:15.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2771" for this suite. 
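activeDeadlineSeconds is one of the few pod spec fields that may be mutated after creation, which is what the test above exploits: it shrinks the deadline on a running pod and then watches the phase flip from Running to Failed with Reason=DeadlineExceeded once the kubelet enforces it. As a merge patch (a sketch; the pod name is from the log, the 5-second value is illustrative):

kubectl patch pod pod-update-activedeadlineseconds-bd1b2d83-de4b-4940-abe4-cfed8bb4beef \
  -n pods-2771 --type=merge -p '{"spec":{"activeDeadlineSeconds":5}}'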
• [SLOW TEST:6.634 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":280,"completed":21,"skipped":261,"failed":0} SSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:39:15.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test override arguments Mar 8 23:39:15.210: INFO: Waiting up to 5m0s for pod "client-containers-93f934ab-273a-4cf4-aaab-cbd6a60c7d75" in namespace "containers-2366" to be "success or failure" Mar 8 23:39:15.219: INFO: Pod "client-containers-93f934ab-273a-4cf4-aaab-cbd6a60c7d75": Phase="Pending", Reason="", readiness=false. Elapsed: 8.956009ms Mar 8 23:39:17.223: INFO: Pod "client-containers-93f934ab-273a-4cf4-aaab-cbd6a60c7d75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013000826s STEP: Saw pod success Mar 8 23:39:17.223: INFO: Pod "client-containers-93f934ab-273a-4cf4-aaab-cbd6a60c7d75" satisfied condition "success or failure" Mar 8 23:39:17.225: INFO: Trying to get logs from node latest-worker pod client-containers-93f934ab-273a-4cf4-aaab-cbd6a60c7d75 container test-container: STEP: delete the pod Mar 8 23:39:17.250: INFO: Waiting for pod client-containers-93f934ab-273a-4cf4-aaab-cbd6a60c7d75 to disappear Mar 8 23:39:17.255: INFO: Pod client-containers-93f934ab-273a-4cf4-aaab-cbd6a60c7d75 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:39:17.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2366" for this suite. 
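Overriding an image's default arguments ("docker cmd"), as exercised above, comes down to containers[].args: args replace the image's CMD, while command (if set) would replace its ENTRYPOINT. A minimal sketch using busybox, which ships no ENTRYPOINT, so the args run directly (the pod name is illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: args-override
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    args: ["echo", "CMD overridden"]   # replaces the image's CMD
EOF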
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":280,"completed":22,"skipped":267,"failed":0} SSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:39:17.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name secret-emptykey-test-77d3dd0d-f9f6-4545-a0d1-42ad35a33e00 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:39:17.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8923" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":280,"completed":23,"skipped":271,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:39:17.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-4161fb6f-8263-49b8-b6cb-e3ec0f8a727c STEP: Creating a pod to test consume configMaps Mar 8 23:39:17.406: INFO: Waiting up to 5m0s for pod "pod-configmaps-a0353a77-7060-41ee-9232-994054635a64" in namespace "configmap-6860" to be "success or failure" Mar 8 23:39:17.447: INFO: Pod "pod-configmaps-a0353a77-7060-41ee-9232-994054635a64": Phase="Pending", Reason="", readiness=false. Elapsed: 41.363209ms Mar 8 23:39:19.451: INFO: Pod "pod-configmaps-a0353a77-7060-41ee-9232-994054635a64": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.044990269s STEP: Saw pod success Mar 8 23:39:19.451: INFO: Pod "pod-configmaps-a0353a77-7060-41ee-9232-994054635a64" satisfied condition "success or failure" Mar 8 23:39:19.454: INFO: Trying to get logs from node latest-worker pod pod-configmaps-a0353a77-7060-41ee-9232-994054635a64 container configmap-volume-test: STEP: delete the pod Mar 8 23:39:19.472: INFO: Waiting for pod pod-configmaps-a0353a77-7060-41ee-9232-994054635a64 to disappear Mar 8 23:39:19.477: INFO: Pod pod-configmaps-a0353a77-7060-41ee-9232-994054635a64 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:39:19.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6860" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":24,"skipped":332,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:39:19.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 23:39:19.955: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 23:39:21.965: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719307559, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719307559, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719307560, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719307559, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 23:39:24.991: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that 
does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:39:25.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-205" for this suite. STEP: Destroying namespace "webhook-205-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:5.886 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":280,"completed":25,"skipped":351,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:39:25.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 8 23:39:25.461: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1af095e5-9a9a-42a9-ac51-27b7a97df9b3" in namespace "projected-9323" to be "success or failure" Mar 8 23:39:25.477: INFO: Pod "downwardapi-volume-1af095e5-9a9a-42a9-ac51-27b7a97df9b3": Phase="Pending", Reason="", readiness=false. Elapsed: 15.750909ms Mar 8 23:39:27.481: INFO: Pod "downwardapi-volume-1af095e5-9a9a-42a9-ac51-27b7a97df9b3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.0197286s STEP: Saw pod success Mar 8 23:39:27.481: INFO: Pod "downwardapi-volume-1af095e5-9a9a-42a9-ac51-27b7a97df9b3" satisfied condition "success or failure" Mar 8 23:39:27.484: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-1af095e5-9a9a-42a9-ac51-27b7a97df9b3 container client-container: STEP: delete the pod Mar 8 23:39:27.982: INFO: Waiting for pod downwardapi-volume-1af095e5-9a9a-42a9-ac51-27b7a97df9b3 to disappear Mar 8 23:39:28.010: INFO: Pod downwardapi-volume-1af095e5-9a9a-42a9-ac51-27b7a97df9b3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:39:28.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9323" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":26,"skipped":366,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:39:28.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 8 23:39:28.168: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b188b260-4188-4fcf-b933-da8ef5e79e7e" in namespace "projected-8304" to be "success or failure" Mar 8 23:39:28.197: INFO: Pod "downwardapi-volume-b188b260-4188-4fcf-b933-da8ef5e79e7e": Phase="Pending", Reason="", readiness=false. Elapsed: 29.421876ms Mar 8 23:39:30.200: INFO: Pod "downwardapi-volume-b188b260-4188-4fcf-b933-da8ef5e79e7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.032739724s STEP: Saw pod success Mar 8 23:39:30.200: INFO: Pod "downwardapi-volume-b188b260-4188-4fcf-b933-da8ef5e79e7e" satisfied condition "success or failure" Mar 8 23:39:30.203: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-b188b260-4188-4fcf-b933-da8ef5e79e7e container client-container: STEP: delete the pod Mar 8 23:39:30.232: INFO: Waiting for pod downwardapi-volume-b188b260-4188-4fcf-b933-da8ef5e79e7e to disappear Mar 8 23:39:30.237: INFO: Pod downwardapi-volume-b188b260-4188-4fcf-b933-da8ef5e79e7e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:39:30.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8304" for this suite. 
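Both projected downward API cases above use the same wiring: a projected volume whose file is filled from resourceFieldRef limits.memory. When the container sets a limit, the file carries that value; when it does not, the kubelet falls back to the node's allocatable memory, which is what the "default memory limit" case checks. A sketch of the limit-set variant (names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    args: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi             # omit this to get node allocatable instead
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF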
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":280,"completed":27,"skipped":405,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:39:30.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1899 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 8 23:39:30.304: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-521' Mar 8 23:39:30.389: INFO: stderr: "" Mar 8 23:39:30.389: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Mar 8 23:39:35.439: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-521 -o json' Mar 8 23:39:35.503: INFO: stderr: "" Mar 8 23:39:35.503: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-08T23:39:30Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-521\",\n \"resourceVersion\": \"129004\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-521/pods/e2e-test-httpd-pod\",\n \"uid\": \"11c97291-50ca-488e-9cbd-0ced887c66b4\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-km6z7\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n 
\"name\": \"default-token-km6z7\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-km6z7\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-08T23:39:30Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-08T23:39:31Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-08T23:39:31Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-08T23:39:30Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://663a5c78b6cf39121a9c24df14142b8fb77b77ca70e6e9d48186ef9f49bc61aa\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-08T23:39:31Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.16\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.221\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.221\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-08T23:39:30Z\"\n }\n}\n" STEP: replace the image in the pod Mar 8 23:39:35.503: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-521' Mar 8 23:39:35.685: INFO: stderr: "" Mar 8 23:39:35.685: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1904 Mar 8 23:39:35.692: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-521' Mar 8 23:39:42.476: INFO: stderr: "" Mar 8 23:39:42.476: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:39:42.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-521" for this suite. 
• [SLOW TEST:12.253 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1895 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":280,"completed":28,"skipped":417,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:39:42.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-3f14bf53-1884-4a13-9fa4-2a2f5fa84651 STEP: Creating a pod to test consume configMaps Mar 8 23:39:42.586: INFO: Waiting up to 5m0s for pod "pod-configmaps-dfb09153-a5eb-46f5-a547-511d69f370c5" in namespace "configmap-101" to be "success or failure" Mar 8 23:39:42.591: INFO: Pod "pod-configmaps-dfb09153-a5eb-46f5-a547-511d69f370c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.512687ms Mar 8 23:39:44.595: INFO: Pod "pod-configmaps-dfb09153-a5eb-46f5-a547-511d69f370c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008771926s STEP: Saw pod success Mar 8 23:39:44.595: INFO: Pod "pod-configmaps-dfb09153-a5eb-46f5-a547-511d69f370c5" satisfied condition "success or failure" Mar 8 23:39:44.598: INFO: Trying to get logs from node latest-worker pod pod-configmaps-dfb09153-a5eb-46f5-a547-511d69f370c5 container configmap-volume-test: STEP: delete the pod Mar 8 23:39:44.623: INFO: Waiting for pod pod-configmaps-dfb09153-a5eb-46f5-a547-511d69f370c5 to disappear Mar 8 23:39:44.627: INFO: Pod pod-configmaps-dfb09153-a5eb-46f5-a547-511d69f370c5 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:39:44.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-101" for this suite. 
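Consuming a ConfigMap volume as non-root, as above, only adds a pod-level securityContext.runAsUser on top of the ordinary mount; the projected files (0644 by default) just need to be readable by that UID. A compact sketch (the ConfigMap name, key, and UID are illustrative):

kubectl create configmap cm-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # run the container as a non-root UID
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    args: ["sh", "-c", "cat /etc/config/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/config
  volumes:
  - name: cm
    configMap:
      name: cm-demo
EOF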
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":280,"completed":29,"skipped":418,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:39:44.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 8 23:39:45.322: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 23:39:48.361: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 23:39:48.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:39:49.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-6736" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:5.082 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":280,"completed":30,"skipped":444,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:39:49.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-map-9bb41a47-1c84-4be3-9be1-f2db4ba9062b STEP: Creating a pod to test consume configMaps Mar 8 23:39:49.844: INFO: Waiting up to 5m0s for pod "pod-configmaps-83ad2e30-8454-4ca9-ae26-12852283d4f3" in namespace "configmap-8891" to be "success or failure" Mar 8 23:39:49.866: INFO: Pod "pod-configmaps-83ad2e30-8454-4ca9-ae26-12852283d4f3": Phase="Pending", Reason="", readiness=false. Elapsed: 21.963979ms Mar 8 23:39:51.872: INFO: Pod "pod-configmaps-83ad2e30-8454-4ca9-ae26-12852283d4f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02756714s Mar 8 23:39:53.876: INFO: Pod "pod-configmaps-83ad2e30-8454-4ca9-ae26-12852283d4f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031530127s STEP: Saw pod success Mar 8 23:39:53.876: INFO: Pod "pod-configmaps-83ad2e30-8454-4ca9-ae26-12852283d4f3" satisfied condition "success or failure" Mar 8 23:39:53.879: INFO: Trying to get logs from node latest-worker pod pod-configmaps-83ad2e30-8454-4ca9-ae26-12852283d4f3 container configmap-volume-test: STEP: delete the pod Mar 8 23:39:53.910: INFO: Waiting for pod pod-configmaps-83ad2e30-8454-4ca9-ae26-12852283d4f3 to disappear Mar 8 23:39:53.932: INFO: Pod pod-configmaps-83ad2e30-8454-4ca9-ae26-12852283d4f3 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:39:53.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8891" for this suite. 
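The mappings variant above differs from the defaultMode and non-root ConfigMap cases only in the volume's items list, which projects a chosen key to a custom relative path instead of using the key name itself. A sketch (names are illustrative):

kubectl create configmap cm-mapped --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mappings
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    args: ["sh", "-c", "cat /etc/config/path/to/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/config
  volumes:
  - name: cm
    configMap:
      name: cm-mapped
      items:
      - key: data-1
        path: path/to/data-1   # remaps key data-1 to a nested file path
EOF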
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":31,"skipped":464,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:39:53.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Mar 8 23:39:58.036: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-4471 PodName:pod-sharedvolume-7a68b94c-3815-4a27-bbeb-8982fc7ff834 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 23:39:58.036: INFO: >>> kubeConfig: /root/.kube/config I0308 23:39:58.072793 7 log.go:172] (0xc0020e33f0) (0xc00234afa0) Create stream I0308 23:39:58.072835 7 log.go:172] (0xc0020e33f0) (0xc00234afa0) Stream added, broadcasting: 1 I0308 23:39:58.075019 7 log.go:172] (0xc0020e33f0) Reply frame received for 1 I0308 23:39:58.075065 7 log.go:172] (0xc0020e33f0) (0xc00234b040) Create stream I0308 23:39:58.075078 7 log.go:172] (0xc0020e33f0) (0xc00234b040) Stream added, broadcasting: 3 I0308 23:39:58.076028 7 log.go:172] (0xc0020e33f0) Reply frame received for 3 I0308 23:39:58.076063 7 log.go:172] (0xc0020e33f0) (0xc00234b0e0) Create stream I0308 23:39:58.076076 7 log.go:172] (0xc0020e33f0) (0xc00234b0e0) Stream added, broadcasting: 5 I0308 23:39:58.077111 7 log.go:172] (0xc0020e33f0) Reply frame received for 5 I0308 23:39:58.129331 7 log.go:172] (0xc0020e33f0) Data frame received for 3 I0308 23:39:58.129371 7 log.go:172] (0xc00234b040) (3) Data frame handling I0308 23:39:58.129386 7 log.go:172] (0xc00234b040) (3) Data frame sent I0308 23:39:58.129401 7 log.go:172] (0xc0020e33f0) Data frame received for 5 I0308 23:39:58.129411 7 log.go:172] (0xc00234b0e0) (5) Data frame handling I0308 23:39:58.129502 7 log.go:172] (0xc0020e33f0) Data frame received for 3 I0308 23:39:58.129519 7 log.go:172] (0xc00234b040) (3) Data frame handling I0308 23:39:58.131302 7 log.go:172] (0xc0020e33f0) Data frame received for 1 I0308 23:39:58.131327 7 log.go:172] (0xc00234afa0) (1) Data frame handling I0308 23:39:58.131345 7 log.go:172] (0xc00234afa0) (1) Data frame sent I0308 23:39:58.131362 7 log.go:172] (0xc0020e33f0) (0xc00234afa0) Stream removed, broadcasting: 1 I0308 23:39:58.131448 7 log.go:172] (0xc0020e33f0) (0xc00234afa0) Stream removed, broadcasting: 1 I0308 23:39:58.131467 7 log.go:172] (0xc0020e33f0) (0xc00234b040) Stream removed, broadcasting: 3 I0308 23:39:58.131487 7 log.go:172] (0xc0020e33f0) (0xc00234b0e0) Stream removed, broadcasting: 5 Mar 8 23:39:58.131: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:39:58.131: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0308 23:39:58.131800 7 log.go:172] (0xc0020e33f0) Go away received STEP: Destroying namespace "emptydir-4471" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":280,"completed":32,"skipped":475,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:39:58.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: starting the proxy server Mar 8 23:39:58.210: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:39:58.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3827" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":280,"completed":33,"skipped":564,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:39:58.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0308 23:39:59.411795 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 8 23:39:59.411: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:39:59.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-692" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":280,"completed":34,"skipped":572,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:39:59.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:39:59.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1093" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":280,"completed":35,"skipped":576,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:39:59.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:40:15.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2299" for this suite. • [SLOW TEST:16.254 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":280,"completed":36,"skipped":598,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:40:15.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 23:40:16.837: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 23:40:18.847: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719307616, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719307616, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719307616, loc:(*time.Location)(0x7e52ca0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719307616, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 23:40:21.876: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 23:40:21.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:40:22.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6640" for this suite. STEP: Destroying namespace "webhook-6640-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:7.198 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":280,"completed":37,"skipped":599,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:40:23.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name projected-secret-test-map-ce1f67f0-07b6-4c4e-aab3-d9e319237c54 STEP: Creating a pod to test consume secrets Mar 8 23:40:23.186: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0c30ae9a-68e1-4c0f-9eab-66742b521261" in namespace "projected-8739" to be "success or failure" Mar 8 23:40:23.191: INFO: Pod 
"pod-projected-secrets-0c30ae9a-68e1-4c0f-9eab-66742b521261": Phase="Pending", Reason="", readiness=false. Elapsed: 4.688199ms Mar 8 23:40:25.195: INFO: Pod "pod-projected-secrets-0c30ae9a-68e1-4c0f-9eab-66742b521261": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008921627s STEP: Saw pod success Mar 8 23:40:25.195: INFO: Pod "pod-projected-secrets-0c30ae9a-68e1-4c0f-9eab-66742b521261" satisfied condition "success or failure" Mar 8 23:40:25.199: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-0c30ae9a-68e1-4c0f-9eab-66742b521261 container projected-secret-volume-test: STEP: delete the pod Mar 8 23:40:25.237: INFO: Waiting for pod pod-projected-secrets-0c30ae9a-68e1-4c0f-9eab-66742b521261 to disappear Mar 8 23:40:25.245: INFO: Pod pod-projected-secrets-0c30ae9a-68e1-4c0f-9eab-66742b521261 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:40:25.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8739" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":38,"skipped":619,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:40:25.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 8 23:40:25.328: INFO: Waiting up to 5m0s for pod "pod-3db8dba0-bed8-485b-8380-de26ebe94910" in namespace "emptydir-5450" to be "success or failure" Mar 8 23:40:25.335: INFO: Pod "pod-3db8dba0-bed8-485b-8380-de26ebe94910": Phase="Pending", Reason="", readiness=false. Elapsed: 6.860156ms Mar 8 23:40:27.338: INFO: Pod "pod-3db8dba0-bed8-485b-8380-de26ebe94910": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009873744s STEP: Saw pod success Mar 8 23:40:27.338: INFO: Pod "pod-3db8dba0-bed8-485b-8380-de26ebe94910" satisfied condition "success or failure" Mar 8 23:40:27.341: INFO: Trying to get logs from node latest-worker pod pod-3db8dba0-bed8-485b-8380-de26ebe94910 container test-container: STEP: delete the pod Mar 8 23:40:27.375: INFO: Waiting for pod pod-3db8dba0-bed8-485b-8380-de26ebe94910 to disappear Mar 8 23:40:27.383: INFO: Pod pod-3db8dba0-bed8-485b-8380-de26ebe94910 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:40:27.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5450" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":39,"skipped":626,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:40:27.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 8 23:40:27.463: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6675 /api/v1/namespaces/watch-6675/configmaps/e2e-watch-test-label-changed ff9522aa-b1f8-4961-8906-e6bcc5fee571 129562 0 2020-03-08 23:40:27 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 23:40:27.463: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6675 /api/v1/namespaces/watch-6675/configmaps/e2e-watch-test-label-changed ff9522aa-b1f8-4961-8906-e6bcc5fee571 129563 0 2020-03-08 23:40:27 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 23:40:27.463: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6675 /api/v1/namespaces/watch-6675/configmaps/e2e-watch-test-label-changed ff9522aa-b1f8-4961-8906-e6bcc5fee571 129564 0 2020-03-08 23:40:27 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 8 23:40:37.494: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6675 /api/v1/namespaces/watch-6675/configmaps/e2e-watch-test-label-changed ff9522aa-b1f8-4961-8906-e6bcc5fee571 129628 0 2020-03-08 23:40:27 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 23:40:37.494: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6675 /api/v1/namespaces/watch-6675/configmaps/e2e-watch-test-label-changed ff9522aa-b1f8-4961-8906-e6bcc5fee571 129629 0 2020-03-08 23:40:27 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] 
[]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 23:40:37.494: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6675 /api/v1/namespaces/watch-6675/configmaps/e2e-watch-test-label-changed ff9522aa-b1f8-4961-8906-e6bcc5fee571 129630 0 2020-03-08 23:40:27 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:40:37.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6675" for this suite. • [SLOW TEST:10.113 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":280,"completed":40,"skipped":627,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:40:37.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-662b6b75-903a-40ee-ad59-19d41be293e7 STEP: Creating a pod to test consume configMaps Mar 8 23:40:37.572: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-47408471-272e-43f5-9bb9-38dea0a7e966" in namespace "projected-3008" to be "success or failure" Mar 8 23:40:37.575: INFO: Pod "pod-projected-configmaps-47408471-272e-43f5-9bb9-38dea0a7e966": Phase="Pending", Reason="", readiness=false. Elapsed: 3.387836ms Mar 8 23:40:39.580: INFO: Pod "pod-projected-configmaps-47408471-272e-43f5-9bb9-38dea0a7e966": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008107837s STEP: Saw pod success Mar 8 23:40:39.580: INFO: Pod "pod-projected-configmaps-47408471-272e-43f5-9bb9-38dea0a7e966" satisfied condition "success or failure" Mar 8 23:40:39.583: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-47408471-272e-43f5-9bb9-38dea0a7e966 container projected-configmap-volume-test: STEP: delete the pod Mar 8 23:40:39.613: INFO: Waiting for pod pod-projected-configmaps-47408471-272e-43f5-9bb9-38dea0a7e966 to disappear Mar 8 23:40:39.622: INFO: Pod pod-projected-configmaps-47408471-272e-43f5-9bb9-38dea0a7e966 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:40:39.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3008" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":280,"completed":41,"skipped":628,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:40:39.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-upd-6314a516-02ad-4ec7-9d32-98be3c8906ab STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:40:43.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1393" for this suite. 
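------------------------------
Editor's note: "binary data should be reflected in volume" exercises the ConfigMap binaryData field alongside plain data. A rough CLI equivalent follows (file names hypothetical): kubectl create configmap places non-UTF-8 file content under binaryData automatically, and both kinds of key are materialized as files when the ConfigMap is mounted into a pod.

printf '\xDE\xAD\xBE\xEF' > payload.bin   # hypothetical binary key
echo 'hello' > note.txt                   # hypothetical text key
kubectl create configmap binary-demo --from-file=payload.bin --from-file=note.txt
kubectl get configmap binary-demo -o yaml # payload.bin appears base64-encoded under binaryData
------------------------------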
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":42,"skipped":639,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:40:43.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 8 23:40:44.348: INFO: Pod name wrapped-volume-race-749e2a7a-7904-415b-aecb-8780fccd1fe3: Found 0 pods out of 5 Mar 8 23:40:49.514: INFO: Pod name wrapped-volume-race-749e2a7a-7904-415b-aecb-8780fccd1fe3: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-749e2a7a-7904-415b-aecb-8780fccd1fe3 in namespace emptydir-wrapper-2236, will wait for the garbage collector to delete the pods Mar 8 23:40:59.602: INFO: Deleting ReplicationController wrapped-volume-race-749e2a7a-7904-415b-aecb-8780fccd1fe3 took: 5.250981ms Mar 8 23:41:00.002: INFO: Terminating ReplicationController wrapped-volume-race-749e2a7a-7904-415b-aecb-8780fccd1fe3 pods took: 400.253869ms STEP: Creating RC which spawns configmap-volume pods Mar 8 23:41:05.980: INFO: Pod name wrapped-volume-race-2341c861-17ef-45fb-bdb1-2ff2a7c89250: Found 1 pods out of 5 Mar 8 23:41:10.987: INFO: Pod name wrapped-volume-race-2341c861-17ef-45fb-bdb1-2ff2a7c89250: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-2341c861-17ef-45fb-bdb1-2ff2a7c89250 in namespace emptydir-wrapper-2236, will wait for the garbage collector to delete the pods Mar 8 23:41:23.125: INFO: Deleting ReplicationController wrapped-volume-race-2341c861-17ef-45fb-bdb1-2ff2a7c89250 took: 14.017493ms Mar 8 23:41:23.325: INFO: Terminating ReplicationController wrapped-volume-race-2341c861-17ef-45fb-bdb1-2ff2a7c89250 pods took: 200.262198ms STEP: Creating RC which spawns configmap-volume pods Mar 8 23:41:28.278: INFO: Pod name wrapped-volume-race-ea1edcdd-a418-4aca-84f3-fda878cb77a0: Found 0 pods out of 5 Mar 8 23:41:33.283: INFO: Pod name wrapped-volume-race-ea1edcdd-a418-4aca-84f3-fda878cb77a0: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-ea1edcdd-a418-4aca-84f3-fda878cb77a0 in namespace emptydir-wrapper-2236, will wait for the garbage collector to delete the pods Mar 8 23:41:43.362: INFO: Deleting ReplicationController wrapped-volume-race-ea1edcdd-a418-4aca-84f3-fda878cb77a0 took: 8.073035ms Mar 8 23:41:43.662: INFO: Terminating ReplicationController wrapped-volume-race-ea1edcdd-a418-4aca-84f3-fda878cb77a0 pods took: 300.28512ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:41:52.689: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2236" for this suite. • [SLOW TEST:68.945 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":280,"completed":43,"skipped":644,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:41:52.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-475d10f6-9c08-4ff5-bdc9-e5c29d7a497e STEP: Creating a pod to test consume configMaps Mar 8 23:41:52.773: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-592fee59-a9b6-4672-9f3c-a09d2656da6d" in namespace "projected-1015" to be "success or failure" Mar 8 23:41:52.777: INFO: Pod "pod-projected-configmaps-592fee59-a9b6-4672-9f3c-a09d2656da6d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.831774ms Mar 8 23:41:54.781: INFO: Pod "pod-projected-configmaps-592fee59-a9b6-4672-9f3c-a09d2656da6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008516792s Mar 8 23:41:56.786: INFO: Pod "pod-projected-configmaps-592fee59-a9b6-4672-9f3c-a09d2656da6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012786361s STEP: Saw pod success Mar 8 23:41:56.786: INFO: Pod "pod-projected-configmaps-592fee59-a9b6-4672-9f3c-a09d2656da6d" satisfied condition "success or failure" Mar 8 23:41:56.789: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-592fee59-a9b6-4672-9f3c-a09d2656da6d container projected-configmap-volume-test: STEP: delete the pod Mar 8 23:41:56.823: INFO: Waiting for pod pod-projected-configmaps-592fee59-a9b6-4672-9f3c-a09d2656da6d to disappear Mar 8 23:41:56.831: INFO: Pod pod-projected-configmaps-592fee59-a9b6-4672-9f3c-a09d2656da6d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:41:56.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1015" for this suite. 
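------------------------------
Editor's note: "consumable in multiple volumes in the same pod" means a single ConfigMap reaches the container through two separate volumes. A minimal sketch with hypothetical names; the framework's own pod differs in detail.

kubectl create configmap multi-demo --from-literal=key=value
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-multi-demo   # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/cm-one/key /etc/cm-two/key"]
    volumeMounts:
    - name: cm-one
      mountPath: /etc/cm-one
    - name: cm-two
      mountPath: /etc/cm-two
  volumes:
  - name: cm-one
    projected:
      sources:
      - configMap:
          name: multi-demo
  - name: cm-two
    projected:
      sources:
      - configMap:
          name: multi-demo
EOF
------------------------------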
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":280,"completed":44,"skipped":688,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:41:56.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod liveness-6f8dba43-3540-491c-90ed-f5cc7c8b8cb4 in namespace container-probe-5342 Mar 8 23:41:58.952: INFO: Started pod liveness-6f8dba43-3540-491c-90ed-f5cc7c8b8cb4 in namespace container-probe-5342 STEP: checking the pod's current state and verifying that restartCount is present Mar 8 23:41:58.959: INFO: Initial restart count of pod liveness-6f8dba43-3540-491c-90ed-f5cc7c8b8cb4 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:45:59.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5342" for this suite. 
• [SLOW TEST:243.029 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":280,"completed":45,"skipped":710,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:45:59.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 23:45:59.959: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:46:01.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1210" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":280,"completed":46,"skipped":737,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:46:01.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:46:01.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1902" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":280,"completed":47,"skipped":757,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:46:01.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 8 23:46:04.312: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:46:04.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9602" for this suite. 
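------------------------------
Editor's note: the "Expected: &{DONE} to match Container's Termination Message: DONE" line above is the framework comparing a custom termination message. A sketch of the mechanism with hypothetical names: the container, running as non-root, writes to a non-default terminationMessagePath, and the kubelet copies that file into the container's terminated state.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo       # hypothetical
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # non-root, matching the test title
  containers:
  - name: term
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "printf DONE > /dev/custom-termination-log"]
    terminationMessagePath: /dev/custom-termination-log   # non-default path
EOF
# After the container exits, the message surfaces in status:
kubectl get pod termination-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
------------------------------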
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":280,"completed":48,"skipped":790,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:46:04.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service nodeport-service with the type=NodePort in namespace services-418 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-418 STEP: creating replication controller externalsvc in namespace services-418 I0308 23:46:04.520974 7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-418, replica count: 2 I0308 23:46:07.571453 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Mar 8 23:46:07.636: INFO: Creating new exec pod Mar 8 23:46:09.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-418 execpod8hdx5 -- /bin/sh -x -c nslookup nodeport-service' Mar 8 23:46:10.301: INFO: stderr: "I0308 23:46:10.221201 464 log.go:172] (0xc000a4b4a0) (0xc000a10820) Create stream\nI0308 23:46:10.221242 464 log.go:172] (0xc000a4b4a0) (0xc000a10820) Stream added, broadcasting: 1\nI0308 23:46:10.224740 464 log.go:172] (0xc000a4b4a0) Reply frame received for 1\nI0308 23:46:10.224783 464 log.go:172] (0xc000a4b4a0) (0xc00070c6e0) Create stream\nI0308 23:46:10.224793 464 log.go:172] (0xc000a4b4a0) (0xc00070c6e0) Stream added, broadcasting: 3\nI0308 23:46:10.225676 464 log.go:172] (0xc000a4b4a0) Reply frame received for 3\nI0308 23:46:10.225706 464 log.go:172] (0xc000a4b4a0) (0xc000549360) Create stream\nI0308 23:46:10.225716 464 log.go:172] (0xc000a4b4a0) (0xc000549360) Stream added, broadcasting: 5\nI0308 23:46:10.226637 464 log.go:172] (0xc000a4b4a0) Reply frame received for 5\nI0308 23:46:10.286769 464 log.go:172] (0xc000a4b4a0) Data frame received for 5\nI0308 23:46:10.286787 464 log.go:172] (0xc000549360) (5) Data frame handling\nI0308 23:46:10.286798 464 log.go:172] (0xc000549360) (5) Data frame sent\n+ nslookup nodeport-service\nI0308 23:46:10.292849 464 log.go:172] (0xc000a4b4a0) Data frame received for 3\nI0308 23:46:10.292863 464 log.go:172] (0xc00070c6e0) (3) Data frame handling\nI0308 23:46:10.292873 464 log.go:172] (0xc00070c6e0) (3) Data frame sent\nI0308 23:46:10.293539 464 log.go:172] (0xc000a4b4a0) Data frame 
received for 3\nI0308 23:46:10.293558 464 log.go:172] (0xc00070c6e0) (3) Data frame handling\nI0308 23:46:10.293571 464 log.go:172] (0xc00070c6e0) (3) Data frame sent\nI0308 23:46:10.294063 464 log.go:172] (0xc000a4b4a0) Data frame received for 5\nI0308 23:46:10.294072 464 log.go:172] (0xc000549360) (5) Data frame handling\nI0308 23:46:10.294081 464 log.go:172] (0xc000a4b4a0) Data frame received for 3\nI0308 23:46:10.294092 464 log.go:172] (0xc00070c6e0) (3) Data frame handling\nI0308 23:46:10.295237 464 log.go:172] (0xc000a4b4a0) Data frame received for 1\nI0308 23:46:10.295252 464 log.go:172] (0xc000a10820) (1) Data frame handling\nI0308 23:46:10.295261 464 log.go:172] (0xc000a10820) (1) Data frame sent\nI0308 23:46:10.295274 464 log.go:172] (0xc000a4b4a0) (0xc000a10820) Stream removed, broadcasting: 1\nI0308 23:46:10.295287 464 log.go:172] (0xc000a4b4a0) Go away received\nI0308 23:46:10.295596 464 log.go:172] (0xc000a4b4a0) (0xc000a10820) Stream removed, broadcasting: 1\nI0308 23:46:10.295616 464 log.go:172] (0xc000a4b4a0) (0xc00070c6e0) Stream removed, broadcasting: 3\nI0308 23:46:10.295627 464 log.go:172] (0xc000a4b4a0) (0xc000549360) Stream removed, broadcasting: 5\n" Mar 8 23:46:10.302: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-418.svc.cluster.local\tcanonical name = externalsvc.services-418.svc.cluster.local.\nName:\texternalsvc.services-418.svc.cluster.local\nAddress: 10.96.67.84\n\n" STEP: deleting ReplicationController externalsvc in namespace services-418, will wait for the garbage collector to delete the pods Mar 8 23:46:10.387: INFO: Deleting ReplicationController externalsvc took: 4.161497ms Mar 8 23:46:10.687: INFO: Terminating ReplicationController externalsvc pods took: 300.218379ms Mar 8 23:46:14.725: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:46:14.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-418" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:10.391 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":280,"completed":49,"skipped":805,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:46:14.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name projected-secret-test-68ccca61-c1e8-4439-b96b-78fc3c50acab STEP: Creating a pod to test consume secrets Mar 8 23:46:14.839: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f9207b35-51cf-47f1-96c9-818304efc6e7" in namespace "projected-4188" to be "success or failure" Mar 8 23:46:14.844: INFO: Pod "pod-projected-secrets-f9207b35-51cf-47f1-96c9-818304efc6e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.267815ms Mar 8 23:46:16.848: INFO: Pod "pod-projected-secrets-f9207b35-51cf-47f1-96c9-818304efc6e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008644427s STEP: Saw pod success Mar 8 23:46:16.848: INFO: Pod "pod-projected-secrets-f9207b35-51cf-47f1-96c9-818304efc6e7" satisfied condition "success or failure" Mar 8 23:46:16.851: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-f9207b35-51cf-47f1-96c9-818304efc6e7 container secret-volume-test: STEP: delete the pod Mar 8 23:46:16.904: INFO: Waiting for pod pod-projected-secrets-f9207b35-51cf-47f1-96c9-818304efc6e7 to disappear Mar 8 23:46:16.910: INFO: Pod pod-projected-secrets-f9207b35-51cf-47f1-96c9-818304efc6e7 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:46:16.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4188" for this suite. 
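------------------------------
Editor's note: like the projected-ConfigMap case earlier, this test reads one Secret through more than one volume in a single pod. The sketch below (hypothetical names) also shows why the projected volume type exists: several sources, here a secret plus a downward-API field, can share one mount point.

kubectl create secret generic proj-demo --from-literal=password=hunter2
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo  # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/all/password /etc/all/podname"]
    volumeMounts:
    - name: all-in-one
      mountPath: /etc/all
  volumes:
  - name: all-in-one
    projected:
      sources:
      - secret:
          name: proj-demo
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
------------------------------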
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":280,"completed":50,"skipped":809,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:46:16.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 23:46:17.333: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 23:46:20.381: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:46:20.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8932" for this suite. STEP: Destroying namespace "webhook-8932-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":280,"completed":51,"skipped":823,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:46:20.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: getting the auto-created API token STEP: reading a file in the container Mar 8 23:46:23.254: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1327 pod-service-account-ef407442-722a-4a05-adb5-a3535e0fe916 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Mar 8 23:46:23.448: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1327 pod-service-account-ef407442-722a-4a05-adb5-a3535e0fe916 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Mar 8 23:46:23.624: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1327 pod-service-account-ef407442-722a-4a05-adb5-a3535e0fe916 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:46:23.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1327" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":280,"completed":52,"skipped":856,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:46:23.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:46:40.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3058" for this suite. • [SLOW TEST:17.111 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":280,"completed":53,"skipped":858,"failed":0} SSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:46:40.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:46:41.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-3912" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":280,"completed":54,"skipped":863,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:46:41.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 8 23:46:41.095: INFO: Waiting up to 5m0s for pod "pod-98f47ee9-1683-4996-8ccc-7d9e1abe1ad9" in namespace "emptydir-3457" to be "success or failure" Mar 8 23:46:41.100: INFO: Pod "pod-98f47ee9-1683-4996-8ccc-7d9e1abe1ad9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.458829ms Mar 8 23:46:43.104: INFO: Pod "pod-98f47ee9-1683-4996-8ccc-7d9e1abe1ad9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008841478s STEP: Saw pod success Mar 8 23:46:43.104: INFO: Pod "pod-98f47ee9-1683-4996-8ccc-7d9e1abe1ad9" satisfied condition "success or failure" Mar 8 23:46:43.108: INFO: Trying to get logs from node latest-worker pod pod-98f47ee9-1683-4996-8ccc-7d9e1abe1ad9 container test-container: STEP: delete the pod Mar 8 23:46:43.151: INFO: Waiting for pod pod-98f47ee9-1683-4996-8ccc-7d9e1abe1ad9 to disappear Mar 8 23:46:43.169: INFO: Pod pod-98f47ee9-1683-4996-8ccc-7d9e1abe1ad9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:46:43.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3457" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":55,"skipped":870,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:46:43.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-map-3b095498-d6cb-4866-b6de-7ab5739eb44f STEP: Creating a pod to test consume secrets Mar 8 23:46:43.262: INFO: Waiting up to 5m0s for pod "pod-secrets-ee83fe3c-3ae1-4d5a-a10a-b93ea242b30d" in namespace "secrets-8735" to be "success or failure" Mar 8 23:46:43.267: INFO: Pod "pod-secrets-ee83fe3c-3ae1-4d5a-a10a-b93ea242b30d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.026041ms Mar 8 23:46:45.271: INFO: Pod "pod-secrets-ee83fe3c-3ae1-4d5a-a10a-b93ea242b30d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009297652s STEP: Saw pod success Mar 8 23:46:45.271: INFO: Pod "pod-secrets-ee83fe3c-3ae1-4d5a-a10a-b93ea242b30d" satisfied condition "success or failure" Mar 8 23:46:45.274: INFO: Trying to get logs from node latest-worker pod pod-secrets-ee83fe3c-3ae1-4d5a-a10a-b93ea242b30d container secret-volume-test: STEP: delete the pod Mar 8 23:46:45.292: INFO: Waiting for pod pod-secrets-ee83fe3c-3ae1-4d5a-a10a-b93ea242b30d to disappear Mar 8 23:46:45.297: INFO: Pod pod-secrets-ee83fe3c-3ae1-4d5a-a10a-b93ea242b30d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:46:45.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8735" for this suite. 
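------------------------------
Editor's note: "with mappings" refers to the items list of a secret volume, which remaps a secret key to a custom file path inside the mount. Minimal sketch, hypothetical names:

kubectl create secret generic map-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mapping-demo    # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: map-demo
      items:
      - key: data-1
        path: new-path-data-1  # key "data-1" exposed under a remapped file name
EOF
------------------------------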
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":56,"skipped":877,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:46:45.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:46:45.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-9608" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":280,"completed":57,"skipped":883,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:46:45.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-downwardapi-9w98 STEP: Creating a pod to test atomic-volume-subpath Mar 8 23:46:45.520: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-9w98" in namespace "subpath-315" to be "success or failure" Mar 8 23:46:45.531: INFO: Pod "pod-subpath-test-downwardapi-9w98": Phase="Pending", Reason="", readiness=false. Elapsed: 10.893153ms Mar 8 23:46:47.535: INFO: Pod "pod-subpath-test-downwardapi-9w98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014669118s Mar 8 23:46:49.539: INFO: Pod "pod-subpath-test-downwardapi-9w98": Phase="Running", Reason="", readiness=true. Elapsed: 4.018591271s Mar 8 23:46:51.543: INFO: Pod "pod-subpath-test-downwardapi-9w98": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.02252599s Mar 8 23:46:53.547: INFO: Pod "pod-subpath-test-downwardapi-9w98": Phase="Running", Reason="", readiness=true. Elapsed: 8.026467952s Mar 8 23:46:55.550: INFO: Pod "pod-subpath-test-downwardapi-9w98": Phase="Running", Reason="", readiness=true. Elapsed: 10.030370138s Mar 8 23:46:57.554: INFO: Pod "pod-subpath-test-downwardapi-9w98": Phase="Running", Reason="", readiness=true. Elapsed: 12.034282656s Mar 8 23:46:59.559: INFO: Pod "pod-subpath-test-downwardapi-9w98": Phase="Running", Reason="", readiness=true. Elapsed: 14.038818359s Mar 8 23:47:01.563: INFO: Pod "pod-subpath-test-downwardapi-9w98": Phase="Running", Reason="", readiness=true. Elapsed: 16.042664754s Mar 8 23:47:03.567: INFO: Pod "pod-subpath-test-downwardapi-9w98": Phase="Running", Reason="", readiness=true. Elapsed: 18.047351752s Mar 8 23:47:05.572: INFO: Pod "pod-subpath-test-downwardapi-9w98": Phase="Running", Reason="", readiness=true. Elapsed: 20.051964935s Mar 8 23:47:07.576: INFO: Pod "pod-subpath-test-downwardapi-9w98": Phase="Running", Reason="", readiness=true. Elapsed: 22.05614861s Mar 8 23:47:09.580: INFO: Pod "pod-subpath-test-downwardapi-9w98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.060326551s STEP: Saw pod success Mar 8 23:47:09.581: INFO: Pod "pod-subpath-test-downwardapi-9w98" satisfied condition "success or failure" Mar 8 23:47:09.583: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-9w98 container test-container-subpath-downwardapi-9w98: STEP: delete the pod Mar 8 23:47:09.601: INFO: Waiting for pod pod-subpath-test-downwardapi-9w98 to disappear Mar 8 23:47:09.612: INFO: Pod pod-subpath-test-downwardapi-9w98 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-9w98 Mar 8 23:47:09.612: INFO: Deleting pod "pod-subpath-test-downwardapi-9w98" in namespace "subpath-315" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:47:09.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-315" for this suite. 
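------------------------------
Note: the atomic-writer subpath test above mounts a single downward API file into the container via subPath. A minimal sketch of the same mechanism, with illustrative names and busybox in place of the framework's image:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo
spec:
  restartPolicy: Never
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
  containers:
  - name: test-container-subpath
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/podname"]
    volumeMounts:
    - name: downward
      mountPath: /etc/podname    # a single file from the volume, not the whole volume
      subPath: podname
EOF
------------------------------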
• [SLOW TEST:24.228 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":280,"completed":58,"skipped":924,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:47:09.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 23:47:09.670: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 8 23:47:12.559: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4619 create -f -' Mar 8 23:47:14.484: INFO: stderr: "" Mar 8 23:47:14.484: INFO: stdout: "e2e-test-crd-publish-openapi-8056-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 8 23:47:14.484: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4619 delete e2e-test-crd-publish-openapi-8056-crds test-cr' Mar 8 23:47:14.599: INFO: stderr: "" Mar 8 23:47:14.599: INFO: stdout: "e2e-test-crd-publish-openapi-8056-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Mar 8 23:47:14.599: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4619 apply -f -' Mar 8 23:47:14.865: INFO: stderr: "" Mar 8 23:47:14.865: INFO: stdout: "e2e-test-crd-publish-openapi-8056-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 8 23:47:14.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4619 delete e2e-test-crd-publish-openapi-8056-crds test-cr' Mar 8 23:47:14.952: INFO: stderr: "" Mar 8 23:47:14.952: INFO: stdout: "e2e-test-crd-publish-openapi-8056-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 8 23:47:14.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8056-crds' Mar 8 23:47:15.232: INFO: stderr: "" Mar 8 23:47:15.232: INFO: stdout: "KIND: 
E2e-test-crd-publish-openapi-8056-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<map[string]>\n Specification of Waldo\n\n status\t<map[string]>\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:47:17.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4619" for this suite. • [SLOW TEST:7.385 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":280,"completed":59,"skipped":950,"failed":0} SS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:47:17.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
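------------------------------
Note: the pod spec dumped below is the interesting part of this test: dnsPolicy=None discards the node's and cluster's resolver settings, and dnsConfig supplies the replacement. A stand-alone manifest with the same nameserver and search values as the dump (the pod name is illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: dns-config-demo
spec:
  dnsPolicy: None            # ignore cluster DNS entirely
  dnsConfig:
    nameservers:
    - 1.1.1.1
    searches:
    - resolv.conf.local
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["pause"]
EOF
------------------------------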
Mar 8 23:47:17.078: INFO: Created pod &Pod{ObjectMeta:{dns-1719 dns-1719 /api/v1/namespaces/dns-1719/pods/dns-1719 207975ed-10b0-41d1-a4cc-1e681844e585 132149 0 2020-03-08 23:47:17 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qglw8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qglw8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qglw8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 23:47:17.082: INFO: The status of Pod dns-1719 is Pending, waiting for it to be Running (with Ready = true) Mar 8 23:47:19.086: INFO: The status of Pod dns-1719 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
Mar 8 23:47:19.086: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-1719 PodName:dns-1719 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 23:47:19.086: INFO: >>> kubeConfig: /root/.kube/config I0308 23:47:19.122646 7 log.go:172] (0xc001d451e0) (0xc002416a00) Create stream I0308 23:47:19.122678 7 log.go:172] (0xc001d451e0) (0xc002416a00) Stream added, broadcasting: 1 I0308 23:47:19.124478 7 log.go:172] (0xc001d451e0) Reply frame received for 1 I0308 23:47:19.124512 7 log.go:172] (0xc001d451e0) (0xc002416aa0) Create stream I0308 23:47:19.124524 7 log.go:172] (0xc001d451e0) (0xc002416aa0) Stream added, broadcasting: 3 I0308 23:47:19.125510 7 log.go:172] (0xc001d451e0) Reply frame received for 3 I0308 23:47:19.125544 7 log.go:172] (0xc001d451e0) (0xc002416b40) Create stream I0308 23:47:19.125557 7 log.go:172] (0xc001d451e0) (0xc002416b40) Stream added, broadcasting: 5 I0308 23:47:19.126600 7 log.go:172] (0xc001d451e0) Reply frame received for 5 I0308 23:47:19.208906 7 log.go:172] (0xc001d451e0) Data frame received for 3 I0308 23:47:19.208932 7 log.go:172] (0xc002416aa0) (3) Data frame handling I0308 23:47:19.208951 7 log.go:172] (0xc002416aa0) (3) Data frame sent I0308 23:47:19.209926 7 log.go:172] (0xc001d451e0) Data frame received for 3 I0308 23:47:19.209957 7 log.go:172] (0xc002416aa0) (3) Data frame handling I0308 23:47:19.210053 7 log.go:172] (0xc001d451e0) Data frame received for 5 I0308 23:47:19.210082 7 log.go:172] (0xc002416b40) (5) Data frame handling I0308 23:47:19.211782 7 log.go:172] (0xc001d451e0) Data frame received for 1 I0308 23:47:19.211812 7 log.go:172] (0xc002416a00) (1) Data frame handling I0308 23:47:19.211832 7 log.go:172] (0xc002416a00) (1) Data frame sent I0308 23:47:19.211851 7 log.go:172] (0xc001d451e0) (0xc002416a00) Stream removed, broadcasting: 1 I0308 23:47:19.211874 7 log.go:172] (0xc001d451e0) Go away received I0308 23:47:19.212131 7 log.go:172] (0xc001d451e0) (0xc002416a00) Stream removed, broadcasting: 1 I0308 23:47:19.212150 7 log.go:172] (0xc001d451e0) (0xc002416aa0) Stream removed, broadcasting: 3 I0308 23:47:19.212163 7 log.go:172] (0xc001d451e0) (0xc002416b40) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Mar 8 23:47:19.212: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-1719 PodName:dns-1719 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 23:47:19.212: INFO: >>> kubeConfig: /root/.kube/config I0308 23:47:19.249196 7 log.go:172] (0xc003f268f0) (0xc0023e2000) Create stream I0308 23:47:19.249219 7 log.go:172] (0xc003f268f0) (0xc0023e2000) Stream added, broadcasting: 1 I0308 23:47:19.251409 7 log.go:172] (0xc003f268f0) Reply frame received for 1 I0308 23:47:19.251457 7 log.go:172] (0xc003f268f0) (0xc0023e20a0) Create stream I0308 23:47:19.251480 7 log.go:172] (0xc003f268f0) (0xc0023e20a0) Stream added, broadcasting: 3 I0308 23:47:19.252422 7 log.go:172] (0xc003f268f0) Reply frame received for 3 I0308 23:47:19.252459 7 log.go:172] (0xc003f268f0) (0xc00234a8c0) Create stream I0308 23:47:19.252475 7 log.go:172] (0xc003f268f0) (0xc00234a8c0) Stream added, broadcasting: 5 I0308 23:47:19.253377 7 log.go:172] (0xc003f268f0) Reply frame received for 5 I0308 23:47:19.340754 7 log.go:172] (0xc003f268f0) Data frame received for 3 I0308 23:47:19.340782 7 log.go:172] (0xc0023e20a0) (3) Data frame handling I0308 23:47:19.340800 7 log.go:172] (0xc0023e20a0) (3) Data frame sent I0308 23:47:19.341272 7 log.go:172] (0xc003f268f0) Data frame received for 3 I0308 23:47:19.341296 7 log.go:172] (0xc0023e20a0) (3) Data frame handling I0308 23:47:19.341339 7 log.go:172] (0xc003f268f0) Data frame received for 5 I0308 23:47:19.341356 7 log.go:172] (0xc00234a8c0) (5) Data frame handling I0308 23:47:19.342542 7 log.go:172] (0xc003f268f0) Data frame received for 1 I0308 23:47:19.342570 7 log.go:172] (0xc0023e2000) (1) Data frame handling I0308 23:47:19.342586 7 log.go:172] (0xc0023e2000) (1) Data frame sent I0308 23:47:19.342600 7 log.go:172] (0xc003f268f0) (0xc0023e2000) Stream removed, broadcasting: 1 I0308 23:47:19.342625 7 log.go:172] (0xc003f268f0) Go away received I0308 23:47:19.342750 7 log.go:172] (0xc003f268f0) (0xc0023e2000) Stream removed, broadcasting: 1 I0308 23:47:19.342767 7 log.go:172] (0xc003f268f0) (0xc0023e20a0) Stream removed, broadcasting: 3 I0308 23:47:19.342780 7 log.go:172] (0xc003f268f0) (0xc00234a8c0) Stream removed, broadcasting: 5 Mar 8 23:47:19.342: INFO: Deleting pod dns-1719... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:47:19.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1719" for this suite. 
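------------------------------
Note: the two exec calls above use agnhost's dns-suffix and dns-server-list helpers to read the configuration back. On any image the same check can be made directly against resolv.conf; assuming the dns-config-demo pod from the sketch above is running:

kubectl exec dns-config-demo -- cat /etc/resolv.conf
# expected output, roughly:
#   nameserver 1.1.1.1
#   search resolv.conf.local
------------------------------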
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":280,"completed":60,"skipped":952,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:47:19.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir volume type on node default medium Mar 8 23:47:19.406: INFO: Waiting up to 5m0s for pod "pod-96a19d58-ad52-40b1-80ee-6308bdb02524" in namespace "emptydir-7715" to be "success or failure" Mar 8 23:47:19.423: INFO: Pod "pod-96a19d58-ad52-40b1-80ee-6308bdb02524": Phase="Pending", Reason="", readiness=false. Elapsed: 16.757196ms Mar 8 23:47:21.427: INFO: Pod "pod-96a19d58-ad52-40b1-80ee-6308bdb02524": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020808549s Mar 8 23:47:23.431: INFO: Pod "pod-96a19d58-ad52-40b1-80ee-6308bdb02524": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024215717s STEP: Saw pod success Mar 8 23:47:23.431: INFO: Pod "pod-96a19d58-ad52-40b1-80ee-6308bdb02524" satisfied condition "success or failure" Mar 8 23:47:23.438: INFO: Trying to get logs from node latest-worker pod pod-96a19d58-ad52-40b1-80ee-6308bdb02524 container test-container: STEP: delete the pod Mar 8 23:47:23.458: INFO: Waiting for pod pod-96a19d58-ad52-40b1-80ee-6308bdb02524 to disappear Mar 8 23:47:23.462: INFO: Pod pod-96a19d58-ad52-40b1-80ee-6308bdb02524 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:47:23.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7715" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":61,"skipped":974,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:47:23.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: set up a multi version CRD Mar 8 23:47:23.554: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:47:39.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7172" for this suite. • [SLOW TEST:15.550 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":280,"completed":62,"skipped":987,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:47:39.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1694 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 8 23:47:39.059: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run 
e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-9685' Mar 8 23:47:39.179: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 8 23:47:39.179: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: rolling-update to same image controller Mar 8 23:47:39.216: INFO: scanned /root for discovery docs: Mar 8 23:47:39.216: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-9685' Mar 8 23:47:55.055: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 8 23:47:55.055: INFO: stdout: "Created e2e-test-httpd-rc-2f36d12554ac41f24f993503b7639ab5\nScaling up e2e-test-httpd-rc-2f36d12554ac41f24f993503b7639ab5 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-2f36d12554ac41f24f993503b7639ab5 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-2f36d12554ac41f24f993503b7639ab5 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Mar 8 23:47:55.055: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-9685' Mar 8 23:47:55.168: INFO: stderr: "" Mar 8 23:47:55.168: INFO: stdout: "e2e-test-httpd-rc-2f36d12554ac41f24f993503b7639ab5-xspn7 " Mar 8 23:47:55.168: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-2f36d12554ac41f24f993503b7639ab5-xspn7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9685' Mar 8 23:47:55.276: INFO: stderr: "" Mar 8 23:47:55.276: INFO: stdout: "true" Mar 8 23:47:55.276: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-2f36d12554ac41f24f993503b7639ab5-xspn7 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9685' Mar 8 23:47:55.346: INFO: stderr: "" Mar 8 23:47:55.346: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Mar 8 23:47:55.346: INFO: e2e-test-httpd-rc-2f36d12554ac41f24f993503b7639ab5-xspn7 is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1700 Mar 8 23:47:55.346: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-9685' Mar 8 23:47:55.442: INFO: stderr: "" Mar 8 23:47:55.442: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:47:55.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9685" for this suite. • [SLOW TEST:16.427 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1689 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":280,"completed":63,"skipped":993,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:47:55.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 
'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:48:22.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8088" for this suite. • [SLOW TEST:27.387 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":280,"completed":64,"skipped":1013,"failed":0} SS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:48:22.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating service nodeport-test with type=NodePort in namespace services-3904 STEP: creating replication controller nodeport-test in namespace services-3904 I0308 23:48:23.010063 7 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-3904, replica count: 2 I0308 23:48:26.060574 7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 8 23:48:26.060: INFO: Creating new exec pod Mar 8 23:48:29.097: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-3904 execpodgl2jm -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Mar 8 23:48:29.336: INFO: stderr: "I0308 23:48:29.262697 780 log.go:172] (0xc00003afd0) (0xc000a660a0) Create stream\nI0308 23:48:29.262751 780 log.go:172] (0xc00003afd0) (0xc000a660a0) Stream added, broadcasting: 1\nI0308 23:48:29.264848 780 log.go:172] (0xc00003afd0) Reply frame received for 1\nI0308 23:48:29.264891 780 log.go:172] (0xc00003afd0) (0xc000a66140) Create stream\nI0308 23:48:29.264905 780 log.go:172] (0xc00003afd0) (0xc000a66140) Stream added, broadcasting: 3\nI0308 23:48:29.266049 780 log.go:172] (0xc00003afd0) Reply frame received for 3\nI0308 23:48:29.266098 780 log.go:172] (0xc00003afd0) (0xc0005846e0) Create stream\nI0308 23:48:29.266154 780 log.go:172] (0xc00003afd0) 
(0xc0005846e0) Stream added, broadcasting: 5\nI0308 23:48:29.267101 780 log.go:172] (0xc00003afd0) Reply frame received for 5\nI0308 23:48:29.329637 780 log.go:172] (0xc00003afd0) Data frame received for 5\nI0308 23:48:29.329655 780 log.go:172] (0xc0005846e0) (5) Data frame handling\nI0308 23:48:29.329665 780 log.go:172] (0xc0005846e0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0308 23:48:29.330577 780 log.go:172] (0xc00003afd0) Data frame received for 5\nI0308 23:48:29.330596 780 log.go:172] (0xc0005846e0) (5) Data frame handling\nI0308 23:48:29.330612 780 log.go:172] (0xc0005846e0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0308 23:48:29.331398 780 log.go:172] (0xc00003afd0) Data frame received for 3\nI0308 23:48:29.331435 780 log.go:172] (0xc000a66140) (3) Data frame handling\nI0308 23:48:29.331455 780 log.go:172] (0xc00003afd0) Data frame received for 5\nI0308 23:48:29.331467 780 log.go:172] (0xc0005846e0) (5) Data frame handling\nI0308 23:48:29.332894 780 log.go:172] (0xc00003afd0) Data frame received for 1\nI0308 23:48:29.332919 780 log.go:172] (0xc000a660a0) (1) Data frame handling\nI0308 23:48:29.332927 780 log.go:172] (0xc000a660a0) (1) Data frame sent\nI0308 23:48:29.332940 780 log.go:172] (0xc00003afd0) (0xc000a660a0) Stream removed, broadcasting: 1\nI0308 23:48:29.332956 780 log.go:172] (0xc00003afd0) Go away received\nI0308 23:48:29.333189 780 log.go:172] (0xc00003afd0) (0xc000a660a0) Stream removed, broadcasting: 1\nI0308 23:48:29.333208 780 log.go:172] (0xc00003afd0) (0xc000a66140) Stream removed, broadcasting: 3\nI0308 23:48:29.333219 780 log.go:172] (0xc00003afd0) (0xc0005846e0) Stream removed, broadcasting: 5\n" Mar 8 23:48:29.336: INFO: stdout: "" Mar 8 23:48:29.337: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-3904 execpodgl2jm -- /bin/sh -x -c nc -zv -t -w 2 10.96.141.175 80' Mar 8 23:48:29.499: INFO: stderr: "I0308 23:48:29.452768 801 log.go:172] (0xc0000e8370) (0xc00063a000) Create stream\nI0308 23:48:29.452811 801 log.go:172] (0xc0000e8370) (0xc00063a000) Stream added, broadcasting: 1\nI0308 23:48:29.454675 801 log.go:172] (0xc0000e8370) Reply frame received for 1\nI0308 23:48:29.454705 801 log.go:172] (0xc0000e8370) (0xc000663b80) Create stream\nI0308 23:48:29.454715 801 log.go:172] (0xc0000e8370) (0xc000663b80) Stream added, broadcasting: 3\nI0308 23:48:29.455412 801 log.go:172] (0xc0000e8370) Reply frame received for 3\nI0308 23:48:29.455434 801 log.go:172] (0xc0000e8370) (0xc000663d60) Create stream\nI0308 23:48:29.455440 801 log.go:172] (0xc0000e8370) (0xc000663d60) Stream added, broadcasting: 5\nI0308 23:48:29.456066 801 log.go:172] (0xc0000e8370) Reply frame received for 5\nI0308 23:48:29.495187 801 log.go:172] (0xc0000e8370) Data frame received for 3\nI0308 23:48:29.495204 801 log.go:172] (0xc000663b80) (3) Data frame handling\nI0308 23:48:29.495430 801 log.go:172] (0xc0000e8370) Data frame received for 5\nI0308 23:48:29.495442 801 log.go:172] (0xc000663d60) (5) Data frame handling\nI0308 23:48:29.495454 801 log.go:172] (0xc000663d60) (5) Data frame sent\nI0308 23:48:29.495459 801 log.go:172] (0xc0000e8370) Data frame received for 5\nI0308 23:48:29.495465 801 log.go:172] (0xc000663d60) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.141.175 80\nConnection to 10.96.141.175 80 port [tcp/http] succeeded!\nI0308 23:48:29.496592 801 log.go:172] (0xc0000e8370) Data frame received for 1\nI0308 23:48:29.496610 801 log.go:172] 
(0xc00063a000) (1) Data frame handling\nI0308 23:48:29.496620 801 log.go:172] (0xc00063a000) (1) Data frame sent\nI0308 23:48:29.496638 801 log.go:172] (0xc0000e8370) (0xc00063a000) Stream removed, broadcasting: 1\nI0308 23:48:29.496658 801 log.go:172] (0xc0000e8370) Go away received\nI0308 23:48:29.496892 801 log.go:172] (0xc0000e8370) (0xc00063a000) Stream removed, broadcasting: 1\nI0308 23:48:29.496906 801 log.go:172] (0xc0000e8370) (0xc000663b80) Stream removed, broadcasting: 3\nI0308 23:48:29.496914 801 log.go:172] (0xc0000e8370) (0xc000663d60) Stream removed, broadcasting: 5\n" Mar 8 23:48:29.499: INFO: stdout: "" Mar 8 23:48:29.499: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-3904 execpodgl2jm -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.16 31370' Mar 8 23:48:29.677: INFO: stderr: "I0308 23:48:29.609972 821 log.go:172] (0xc00003ab00) (0xc000966000) Create stream\nI0308 23:48:29.610017 821 log.go:172] (0xc00003ab00) (0xc000966000) Stream added, broadcasting: 1\nI0308 23:48:29.615164 821 log.go:172] (0xc00003ab00) Reply frame received for 1\nI0308 23:48:29.615200 821 log.go:172] (0xc00003ab00) (0xc000a1c000) Create stream\nI0308 23:48:29.615208 821 log.go:172] (0xc00003ab00) (0xc000a1c000) Stream added, broadcasting: 3\nI0308 23:48:29.616197 821 log.go:172] (0xc00003ab00) Reply frame received for 3\nI0308 23:48:29.616221 821 log.go:172] (0xc00003ab00) (0xc0009660a0) Create stream\nI0308 23:48:29.616228 821 log.go:172] (0xc00003ab00) (0xc0009660a0) Stream added, broadcasting: 5\nI0308 23:48:29.616996 821 log.go:172] (0xc00003ab00) Reply frame received for 5\nI0308 23:48:29.672847 821 log.go:172] (0xc00003ab00) Data frame received for 3\nI0308 23:48:29.672876 821 log.go:172] (0xc000a1c000) (3) Data frame handling\nI0308 23:48:29.673043 821 log.go:172] (0xc00003ab00) Data frame received for 5\nI0308 23:48:29.673057 821 log.go:172] (0xc0009660a0) (5) Data frame handling\nI0308 23:48:29.673068 821 log.go:172] (0xc0009660a0) (5) Data frame sent\nI0308 23:48:29.673073 821 log.go:172] (0xc00003ab00) Data frame received for 5\nI0308 23:48:29.673078 821 log.go:172] (0xc0009660a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.16 31370\nConnection to 172.17.0.16 31370 port [tcp/31370] succeeded!\nI0308 23:48:29.674079 821 log.go:172] (0xc00003ab00) Data frame received for 1\nI0308 23:48:29.674138 821 log.go:172] (0xc000966000) (1) Data frame handling\nI0308 23:48:29.674151 821 log.go:172] (0xc000966000) (1) Data frame sent\nI0308 23:48:29.674160 821 log.go:172] (0xc00003ab00) (0xc000966000) Stream removed, broadcasting: 1\nI0308 23:48:29.674204 821 log.go:172] (0xc00003ab00) Go away received\nI0308 23:48:29.674386 821 log.go:172] (0xc00003ab00) (0xc000966000) Stream removed, broadcasting: 1\nI0308 23:48:29.674398 821 log.go:172] (0xc00003ab00) (0xc000a1c000) Stream removed, broadcasting: 3\nI0308 23:48:29.674405 821 log.go:172] (0xc00003ab00) (0xc0009660a0) Stream removed, broadcasting: 5\n" Mar 8 23:48:29.677: INFO: stdout: "" Mar 8 23:48:29.677: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-3904 execpodgl2jm -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 31370' Mar 8 23:48:29.865: INFO: stderr: "I0308 23:48:29.794413 842 log.go:172] (0xc00003ab00) (0xc000625cc0) Create stream\nI0308 23:48:29.794456 842 log.go:172] (0xc00003ab00) (0xc000625cc0) Stream added, broadcasting: 1\nI0308 23:48:29.796200 842 log.go:172] (0xc00003ab00) 
Reply frame received for 1\nI0308 23:48:29.796239 842 log.go:172] (0xc00003ab00) (0xc0000c4000) Create stream\nI0308 23:48:29.796256 842 log.go:172] (0xc00003ab00) (0xc0000c4000) Stream added, broadcasting: 3\nI0308 23:48:29.796908 842 log.go:172] (0xc00003ab00) Reply frame received for 3\nI0308 23:48:29.796934 842 log.go:172] (0xc00003ab00) (0xc00015c000) Create stream\nI0308 23:48:29.796945 842 log.go:172] (0xc00003ab00) (0xc00015c000) Stream added, broadcasting: 5\nI0308 23:48:29.797532 842 log.go:172] (0xc00003ab00) Reply frame received for 5\nI0308 23:48:29.860240 842 log.go:172] (0xc00003ab00) Data frame received for 3\nI0308 23:48:29.860280 842 log.go:172] (0xc0000c4000) (3) Data frame handling\nI0308 23:48:29.860445 842 log.go:172] (0xc00003ab00) Data frame received for 5\nI0308 23:48:29.860474 842 log.go:172] (0xc00015c000) (5) Data frame handling\nI0308 23:48:29.860492 842 log.go:172] (0xc00015c000) (5) Data frame sent\nI0308 23:48:29.860499 842 log.go:172] (0xc00003ab00) Data frame received for 5\nI0308 23:48:29.860511 842 log.go:172] (0xc00015c000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.18 31370\nConnection to 172.17.0.18 31370 port [tcp/31370] succeeded!\nI0308 23:48:29.861462 842 log.go:172] (0xc00003ab00) Data frame received for 1\nI0308 23:48:29.861480 842 log.go:172] (0xc000625cc0) (1) Data frame handling\nI0308 23:48:29.861491 842 log.go:172] (0xc000625cc0) (1) Data frame sent\nI0308 23:48:29.861499 842 log.go:172] (0xc00003ab00) (0xc000625cc0) Stream removed, broadcasting: 1\nI0308 23:48:29.861508 842 log.go:172] (0xc00003ab00) Go away received\nI0308 23:48:29.861873 842 log.go:172] (0xc00003ab00) (0xc000625cc0) Stream removed, broadcasting: 1\nI0308 23:48:29.861898 842 log.go:172] (0xc00003ab00) (0xc0000c4000) Stream removed, broadcasting: 3\nI0308 23:48:29.861914 842 log.go:172] (0xc00003ab00) (0xc00015c000) Stream removed, broadcasting: 5\n" Mar 8 23:48:29.865: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:48:29.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3904" for this suite. 
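------------------------------
Note: the NodePort test above probes the service name, the cluster IP, and each node IP at the allocated nodePort (31370 in this run) with nc from an exec pod. A hand-rolled version of the same reachability check; the deployment and pod names are illustrative, and busybox's wget replaces nc:

kubectl create deployment nodeport-demo --image=docker.io/library/httpd:2.4.38-alpine
kubectl expose deployment nodeport-demo --type=NodePort --port=80
NODE_PORT=$(kubectl get svc nodeport-demo -o jsonpath='{.spec.ports[0].nodePort}')
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
kubectl run wget-check --rm -i --restart=Never --image=docker.io/library/busybox:1.29 -- \
  wget -qO- "http://$NODE_IP:$NODE_PORT/"   # prints httpd's "It works!" page on success
------------------------------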
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:7.037 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":280,"completed":65,"skipped":1015,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:48:29.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1863 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 8 23:48:29.935: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7142' Mar 8 23:48:30.030: INFO: stderr: "" Mar 8 23:48:30.030: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1868 Mar 8 23:48:30.044: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7142' Mar 8 23:48:42.087: INFO: stderr: "" Mar 8 23:48:42.087: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:48:42.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7142" for this suite. 
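------------------------------
Note: the invocation above can be replayed verbatim against a comparable cluster; with --restart=Never and the run-pod/v1 generator, kubectl run creates a bare pod rather than a managed workload:

kubectl run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 \
  --image=docker.io/library/httpd:2.4.38-alpine
kubectl get pod e2e-test-httpd-pod      # should reach Running
kubectl delete pod e2e-test-httpd-pod   # blocks until the pod is gone, as the ~12s delete above shows
------------------------------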
• [SLOW TEST:12.222 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1859 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":280,"completed":66,"skipped":1030,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:48:42.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 8 23:48:42.152: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3e4ea8fd-80ea-41bf-8cbd-9b8b294869d5" in namespace "downward-api-8786" to be "success or failure" Mar 8 23:48:42.197: INFO: Pod "downwardapi-volume-3e4ea8fd-80ea-41bf-8cbd-9b8b294869d5": Phase="Pending", Reason="", readiness=false. Elapsed: 45.811563ms Mar 8 23:48:44.202: INFO: Pod "downwardapi-volume-3e4ea8fd-80ea-41bf-8cbd-9b8b294869d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.049948331s STEP: Saw pod success Mar 8 23:48:44.202: INFO: Pod "downwardapi-volume-3e4ea8fd-80ea-41bf-8cbd-9b8b294869d5" satisfied condition "success or failure" Mar 8 23:48:44.205: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-3e4ea8fd-80ea-41bf-8cbd-9b8b294869d5 container client-container: STEP: delete the pod Mar 8 23:48:44.227: INFO: Waiting for pod downwardapi-volume-3e4ea8fd-80ea-41bf-8cbd-9b8b294869d5 to disappear Mar 8 23:48:44.231: INFO: Pod downwardapi-volume-3e4ea8fd-80ea-41bf-8cbd-9b8b294869d5 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:48:44.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8786" for this suite. 
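------------------------------
Note: the Downward API volume test above projects the container's CPU limit into a file. A minimal sketch with an explicit divisor so the file content is predictable; names are illustrative and busybox stands in for the framework's image:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo
spec:
  restartPolicy: Never
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m          # report the limit in millicores
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
EOF

kubectl logs downwardapi-cpu-demo   # prints 500 once the pod has completed
------------------------------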
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":280,"completed":67,"skipped":1049,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:48:44.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 23:48:44.324: INFO: Creating deployment "webserver-deployment" Mar 8 23:48:44.328: INFO: Waiting for observed generation 1 Mar 8 23:48:46.347: INFO: Waiting for all required pods to come up Mar 8 23:48:46.351: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 8 23:48:48.361: INFO: Waiting for deployment "webserver-deployment" to complete Mar 8 23:48:48.365: INFO: Updating deployment "webserver-deployment" with a non-existent image Mar 8 23:48:48.370: INFO: Updating deployment webserver-deployment Mar 8 23:48:48.370: INFO: Waiting for observed generation 2 Mar 8 23:48:51.007: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 8 23:48:51.056: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 8 23:48:51.061: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 8 23:48:51.169: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 8 23:48:51.169: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 8 23:48:51.172: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 8 23:48:51.176: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Mar 8 23:48:51.176: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Mar 8 23:48:51.182: INFO: Updating deployment webserver-deployment Mar 8 23:48:51.182: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Mar 8 23:48:51.356: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 8 23:48:54.290: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 8 23:48:54.504: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-3003 /apis/apps/v1/namespaces/deployment-3003/deployments/webserver-deployment a0c3d1ee-46f9-4434-bf4f-117cb678f912 133018 3 2020-03-08 23:48:44 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0036e9408 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-08 23:48:51 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-03-08 23:48:51 +0000 UTC,LastTransitionTime:2020-03-08 23:48:44 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Mar 8 23:48:54.512: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-3003 /apis/apps/v1/namespaces/deployment-3003/replicasets/webserver-deployment-c7997dcc8 fa98bc2c-4fab-4140-8e09-10bede88998e 133012 3 2020-03-08 23:48:48 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment a0c3d1ee-46f9-4434-bf4f-117cb678f912 0xc0036e99f7 0xc0036e99f8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0036e9a88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 8 23:48:54.512: INFO: All old ReplicaSets of Deployment "webserver-deployment": Mar 8 23:48:54.512: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-3003 
/apis/apps/v1/namespaces/deployment-3003/replicasets/webserver-deployment-595b5b9587 0fe95ae3-eff0-49bd-a89e-3a0d02960f2d 132993 3 2020-03-08 23:48:44 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment a0c3d1ee-46f9-4434-bf4f-117cb678f912 0xc0036e9907 0xc0036e9908}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0036e9988 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Mar 8 23:48:54.521: INFO: Pod "webserver-deployment-595b5b9587-2nkc7" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2nkc7 webserver-deployment-595b5b9587- deployment-3003 /api/v1/namespaces/deployment-3003/pods/webserver-deployment-595b5b9587-2nkc7 41ee038f-dcc6-4711-91bb-5d5c9eee1dd7 133064 0 2020-03-08 23:48:51 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 0fe95ae3-eff0-49bd-a89e-3a0d02960f2d 0xc003622e17 0xc003622e18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x9wwn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x9wwn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x9wwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-08 23:48:51 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 23:48:54.521: INFO: Pod "webserver-deployment-595b5b9587-5rknp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5rknp webserver-deployment-595b5b9587- deployment-3003 /api/v1/namespaces/deployment-3003/pods/webserver-deployment-595b5b9587-5rknp 84136838-6d49-47ce-8607-b2b3be0cbb9a 133029 0 2020-03-08 23:48:51 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 0fe95ae3-eff0-49bd-a89e-3a0d02960f2d 0xc003622f87 0xc003622f88}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x9wwn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x9wwn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x9wwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil
,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-08 23:48:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 23:48:54.521: INFO: Pod "webserver-deployment-595b5b9587-5vh4d" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5vh4d webserver-deployment-595b5b9587- deployment-3003 /api/v1/namespaces/deployment-3003/pods/webserver-deployment-595b5b9587-5vh4d 4f9a84eb-a49c-4501-96c1-482512e83678 133065 0 2020-03-08 23:48:51 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 0fe95ae3-eff0-49bd-a89e-3a0d02960f2d 0xc003623117 0xc003623118}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x9wwn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x9wwn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x9wwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-08 23:48:51 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 23:48:54.521: INFO: Pod "webserver-deployment-595b5b9587-6bngz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6bngz webserver-deployment-595b5b9587- deployment-3003 /api/v1/namespaces/deployment-3003/pods/webserver-deployment-595b5b9587-6bngz 86d14bf2-505d-4ab9-b0b7-de3ed2a6b4b3 133009 0 2020-03-08 23:48:51 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 0fe95ae3-eff0-49bd-a89e-3a0d02960f2d 0xc0036232a7 0xc0036232a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x9wwn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x9wwn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x9wwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,
EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-08 23:48:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 23:48:54.522: INFO: Pod "webserver-deployment-595b5b9587-8p7fm" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8p7fm webserver-deployment-595b5b9587- deployment-3003 /api/v1/namespaces/deployment-3003/pods/webserver-deployment-595b5b9587-8p7fm 9e255c8e-28c0-4b9f-8366-5f91f4176fa1 132872 0 2020-03-08 23:48:44 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 0fe95ae3-eff0-49bd-a89e-3a0d02960f2d 0xc003623417 0xc003623418}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x9wwn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x9wwn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x9wwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.2.49,StartTime:2020-03-08 23:48:44 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 23:48:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4cbc3c747f5e4c300eecc5937ef43e0caaec5503788b460857fe5376b270b938,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.49,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 23:48:54.522: INFO: Pod "webserver-deployment-595b5b9587-9b47w" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9b47w webserver-deployment-595b5b9587- deployment-3003 /api/v1/namespaces/deployment-3003/pods/webserver-deployment-595b5b9587-9b47w da47076a-da4a-4cbc-bd3a-3beb28cbbfb7 132844 0 2020-03-08 23:48:44 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 0fe95ae3-eff0-49bd-a89e-3a0d02960f2d 0xc003623597 0xc003623598}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x9wwn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x9wwn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x9wwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Eff
ect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.18,StartTime:2020-03-08 23:48:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 23:48:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0e02e3313d2127106a49b0f9253c91a2780c2fc16f2829bce35eb55e2107edc2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.18,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 23:48:54.522: INFO: Pod "webserver-deployment-595b5b9587-cxrq4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cxrq4 webserver-deployment-595b5b9587- deployment-3003 /api/v1/namespaces/deployment-3003/pods/webserver-deployment-595b5b9587-cxrq4 0a423de0-0cd7-4a7b-8c4b-bc25a9dbe011 133000 0 2020-03-08 23:48:51 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 0fe95ae3-eff0-49bd-a89e-3a0d02960f2d 0xc003623717 0xc003623718}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x9wwn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x9wwn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x9wwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-08 23:48:51 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 23:48:54.522: INFO: Pod "webserver-deployment-595b5b9587-hg9rc" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hg9rc webserver-deployment-595b5b9587- deployment-3003 /api/v1/namespaces/deployment-3003/pods/webserver-deployment-595b5b9587-hg9rc 830d9885-a75c-423f-8e8b-126baf948fef 133069 0 2020-03-08 23:48:51 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 0fe95ae3-eff0-49bd-a89e-3a0d02960f2d 0xc003623877 0xc003623878}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x9wwn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x9wwn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x9wwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,
EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-08 23:48:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 23:48:54.523: INFO: Pod "webserver-deployment-595b5b9587-hlhs7" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hlhs7 webserver-deployment-595b5b9587- deployment-3003 /api/v1/namespaces/deployment-3003/pods/webserver-deployment-595b5b9587-hlhs7 2bce08ca-c4ea-4f85-9f57-bb7bc33ab27c 133010 0 2020-03-08 23:48:51 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 0fe95ae3-eff0-49bd-a89e-3a0d02960f2d 0xc003623a07 0xc003623a08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x9wwn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x9wwn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x9wwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-08 23:48:51 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 23:48:54.523: INFO: Pod "webserver-deployment-595b5b9587-jcrf2" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jcrf2 webserver-deployment-595b5b9587- deployment-3003 /api/v1/namespaces/deployment-3003/pods/webserver-deployment-595b5b9587-jcrf2 7a2c11dd-4d6c-4956-8c44-f5b5e2ee1ca9 132835 0 2020-03-08 23:48:44 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 0fe95ae3-eff0-49bd-a89e-3a0d02960f2d 0xc003623b77 0xc003623b78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x9wwn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x9wwn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x9wwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,Enab
leServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.15,StartTime:2020-03-08 23:48:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 23:48:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://37ebb88e7b09141911124dde0993e18244eefd6538d067f64d15868b2fb56c24,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.15,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 23:48:54.523: INFO: Pod "webserver-deployment-595b5b9587-jcsz5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jcsz5 webserver-deployment-595b5b9587- deployment-3003 /api/v1/namespaces/deployment-3003/pods/webserver-deployment-595b5b9587-jcsz5 3143dfdc-7b11-41fd-b167-5b9389e0ee7e 133015 0 2020-03-08 23:48:51 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 0fe95ae3-eff0-49bd-a89e-3a0d02960f2d 0xc003623d07 0xc003623d08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x9wwn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x9wwn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x9wwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-08 23:48:51 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 8 23:48:54.524: INFO: Pod "webserver-deployment-595b5b9587-kkhp8" is available: phase=Running, node=latest-worker2, hostIP=172.17.0.18, podIP=10.244.2.52; container httpd (docker.io/library/httpd:2.4.38-alpine) running since 2020-03-08 23:48:47 +0000 UTC, Ready=true, restartCount=0, QoS=BestEffort
Mar 8 23:48:54.525: INFO: Pod "webserver-deployment-595b5b9587-knzgv" is not available: phase=Pending, node=latest-worker2, hostIP=172.17.0.18, podIP not yet assigned; container httpd (docker.io/library/httpd:2.4.38-alpine) waiting, reason ContainerCreating; Ready=false (ContainersNotReady: containers with unready status: [httpd]); created 2020-03-08 23:48:51 +0000 UTC
Mar 8 23:48:54.525: INFO: Pod "webserver-deployment-595b5b9587-ktnfw" is available: phase=Running, node=latest-worker, hostIP=172.17.0.16, podIP=10.244.1.17; container httpd (docker.io/library/httpd:2.4.38-alpine) running since 2020-03-08 23:48:46 +0000 UTC, Ready=true, restartCount=0, QoS=BestEffort
Mar 8 23:48:54.526: INFO: Pod "webserver-deployment-595b5b9587-pzg56" is not available: phase=Pending, node=latest-worker2, hostIP=172.17.0.18, podIP not yet assigned; container httpd (docker.io/library/httpd:2.4.38-alpine) waiting, reason ContainerCreating; Ready=false (ContainersNotReady: containers with unready status: [httpd]); created 2020-03-08 23:48:51 +0000 UTC
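For reference, "available" in these lines matches the deployment controller's rule: a pod counts as available once its Ready condition has been True for at least minReadySeconds. A minimal Go sketch of that check, assuming the k8s.io/api and k8s.io/apimachinery modules (helper names here are illustrative, not the test framework's own):

    package podcheck

    import (
            "time"

            v1 "k8s.io/api/core/v1"
            metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // readyCondition returns the pod's Ready condition, or nil if the pod
    // does not report one.
    func readyCondition(status v1.PodStatus) *v1.PodCondition {
            for i := range status.Conditions {
                    if status.Conditions[i].Type == v1.PodReady {
                            return &status.Conditions[i]
                    }
            }
            return nil
    }

    // isAvailable reports whether the pod is Ready and has stayed Ready for
    // at least minReadySeconds, which is the sense in which the log lines
    // above call a pod "available".
    func isAvailable(pod *v1.Pod, minReadySeconds int32, now metav1.Time) bool {
            c := readyCondition(pod.Status)
            if c == nil || c.Status != v1.ConditionTrue {
                    return false
            }
            if minReadySeconds == 0 {
                    return true
            }
            window := time.Duration(minReadySeconds) * time.Second
            return !c.LastTransitionTime.IsZero() && c.LastTransitionTime.Add(window).Before(now.Time)
    }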
Mar 8 23:48:54.526: INFO: Pod "webserver-deployment-595b5b9587-qnf8f" is not available: phase=Pending, node=latest-worker, hostIP=172.17.0.16, podIP not yet assigned; container httpd (docker.io/library/httpd:2.4.38-alpine) waiting, reason ContainerCreating; Ready=false (ContainersNotReady: containers with unready status: [httpd]); created 2020-03-08 23:48:51 +0000 UTC
Mar 8 23:48:54.527: INFO: Pod "webserver-deployment-595b5b9587-r868p" is not available: phase=Pending, node=latest-worker, hostIP=172.17.0.16, podIP not yet assigned; container httpd (docker.io/library/httpd:2.4.38-alpine) waiting, reason ContainerCreating; Ready=false (ContainersNotReady: containers with unready status: [httpd]); created 2020-03-08 23:48:51 +0000 UTC
Mar 8 23:48:54.527: INFO: Pod "webserver-deployment-595b5b9587-s86k2" is available: phase=Running, node=latest-worker2, hostIP=172.17.0.18, podIP=10.244.2.50; container httpd (docker.io/library/httpd:2.4.38-alpine) running since 2020-03-08 23:48:47 +0000 UTC, Ready=true, restartCount=0, QoS=BestEffort
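For the Pending pods, the detail that matters is the container's waiting reason. A companion helper in the same sketch, pulling that reason out of a PodStatus (same assumptions and imports as above):

    // firstWaitingReason returns the Waiting reason of the first container
    // that has not started (here always "ContainerCreating"), or "" if no
    // container is waiting.
    func firstWaitingReason(status v1.PodStatus) string {
            for _, cs := range status.ContainerStatuses {
                    if cs.State.Waiting != nil {
                            return cs.State.Waiting.Reason
                    }
            }
            return ""
    }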
Mar 8 23:48:54.528: INFO: Pod "webserver-deployment-595b5b9587-wdtcx" is available: phase=Running, node=latest-worker, hostIP=172.17.0.16, podIP=10.244.1.16; container httpd (docker.io/library/httpd:2.4.38-alpine) running since 2020-03-08 23:48:46 +0000 UTC, Ready=true, restartCount=0, QoS=BestEffort
Mar 8 23:48:54.528: INFO: Pod "webserver-deployment-595b5b9587-z62zd" is available: phase=Running, node=latest-worker2, hostIP=172.17.0.18, podIP=10.244.2.51; container httpd (docker.io/library/httpd:2.4.38-alpine) running since 2020-03-08 23:48:47 +0000 UTC, Ready=true, restartCount=0, QoS=BestEffort
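From here the dumps switch from ReplicaSet webserver-deployment-595b5b9587 (image docker.io/library/httpd:2.4.38-alpine) to webserver-deployment-c7997dcc8, whose pod template points at webserver:404, an image tag that cannot be pulled, so its pods stay Pending indefinitely. A sketch of the kind of update that triggers such a rollout, assuming a context-aware client-go clientset ("context" and k8s.io/client-go/kubernetes imported; the deployment name is inferred from the ReplicaSet names in the log):

    // rollToBadImage retargets the deployment's pod template at an image tag
    // that cannot be pulled; the ReplicaSet created for it (c7997dcc8 here)
    // then only ever produces Pending pods.
    func rollToBadImage(ctx context.Context, c kubernetes.Interface) error {
            dep, err := c.AppsV1().Deployments("deployment-3003").Get(ctx, "webserver-deployment", metav1.GetOptions{})
            if err != nil {
                    return err
            }
            dep.Spec.Template.Spec.Containers[0].Image = "webserver:404"
            _, err = c.AppsV1().Deployments("deployment-3003").Update(ctx, dep, metav1.UpdateOptions{})
            return err
    }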
Mar 8 23:48:54.530: INFO: Pod "webserver-deployment-c7997dcc8-4hzjm" is not available: phase=Pending, node=latest-worker, hostIP=172.17.0.16, podIP not yet assigned; container httpd (webserver:404) waiting, reason ContainerCreating; Ready=false (ContainersNotReady: containers with unready status: [httpd]); created 2020-03-08 23:48:51 +0000 UTC
Mar 8 23:48:54.531: INFO: Pod "webserver-deployment-c7997dcc8-4lgm6" is not available: phase=Pending, node=latest-worker2, hostIP=172.17.0.18, podIP not yet assigned; container httpd (webserver:404) waiting, reason ContainerCreating; Ready=false (ContainersNotReady: containers with unready status: [httpd]); created 2020-03-08 23:48:51 +0000 UTC
Mar 8 23:48:54.532: INFO: Pod "webserver-deployment-c7997dcc8-52shw" is not available: phase=Pending, node=latest-worker2, hostIP=172.17.0.18, podIP not yet assigned; container httpd (webserver:404) waiting, reason ContainerCreating; Ready=false (ContainersNotReady: containers with unready status: [httpd]); created 2020-03-08 23:48:48 +0000 UTC
Mar 8 23:48:54.532: INFO: Pod "webserver-deployment-c7997dcc8-5xhfn" is not available: phase=Pending, node=latest-worker, hostIP=172.17.0.16, podIP not yet assigned; container httpd (webserver:404) waiting, reason ContainerCreating; Ready=false (ContainersNotReady: containers with unready status: [httpd]); created 2020-03-08 23:48:48 +0000 UTC
Mar 8 23:48:54.532: INFO: Pod "webserver-deployment-c7997dcc8-b54sm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-b54sm webserver-deployment-c7997dcc8- deployment-3003 /api/v1/namespaces/deployment-3003/pods/webserver-deployment-c7997dcc8-b54sm 2cd3edf2-4059-41ec-8ef1-33a51765c89b 132930 0 2020-03-08 23:48:48 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fa98bc2c-4fab-4140-8e09-10bede88998e 0xc0037413e0 0xc0037413e1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x9wwn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x9wwn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x9wwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-08 23:48:48 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 23:48:54.533: INFO: Pod "webserver-deployment-c7997dcc8-chhr7" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-chhr7 webserver-deployment-c7997dcc8- deployment-3003 /api/v1/namespaces/deployment-3003/pods/webserver-deployment-c7997dcc8-chhr7 43fbf8b4-cfc6-4700-8153-2526dd289eb1 133084 0 2020-03-08 23:48:48 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fa98bc2c-4fab-4140-8e09-10bede88998e 0xc003741580 0xc003741581}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x9wwn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x9wwn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x9wwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead
:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.20,StartTime:2020-03-08 23:48:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.20,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 23:48:54.533: INFO: Pod "webserver-deployment-c7997dcc8-ddrfl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ddrfl webserver-deployment-c7997dcc8- deployment-3003 /api/v1/namespaces/deployment-3003/pods/webserver-deployment-c7997dcc8-ddrfl bd496667-c611-4904-84b0-1ba0a189cf3d 133081 0 2020-03-08 23:48:51 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fa98bc2c-4fab-4140-8e09-10bede88998e 0xc003741780 0xc003741781}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x9wwn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x9wwn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x9wwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-08 23:48:51 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 23:48:54.533: INFO: Pod "webserver-deployment-c7997dcc8-fwg8n" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fwg8n webserver-deployment-c7997dcc8- deployment-3003 /api/v1/namespaces/deployment-3003/pods/webserver-deployment-c7997dcc8-fwg8n 8004d6ec-45bb-40e4-9d89-c3a12b0d5e49 133078 0 2020-03-08 23:48:51 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fa98bc2c-4fab-4140-8e09-10bede88998e 0xc003741900 0xc003741901}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x9wwn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x9wwn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x9wwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhea
d:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-08 23:48:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 23:48:54.533: INFO: Pod "webserver-deployment-c7997dcc8-rfkkc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rfkkc webserver-deployment-c7997dcc8- deployment-3003 /api/v1/namespaces/deployment-3003/pods/webserver-deployment-c7997dcc8-rfkkc 31361677-42e1-4827-b527-99c230eba87c 133035 0 2020-03-08 23:48:51 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fa98bc2c-4fab-4140-8e09-10bede88998e 0xc003741b00 0xc003741b01}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x9wwn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x9wwn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x9wwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-08 23:48:51 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 23:48:54.534: INFO: Pod "webserver-deployment-c7997dcc8-rm9wb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rm9wb webserver-deployment-c7997dcc8- deployment-3003 /api/v1/namespaces/deployment-3003/pods/webserver-deployment-c7997dcc8-rm9wb 243976e4-fe30-4e7e-99b6-7a8f8ee2fb5c 133021 0 2020-03-08 23:48:51 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fa98bc2c-4fab-4140-8e09-10bede88998e 0xc003741cd0 0xc003741cd1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x9wwn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x9wwn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x9wwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhea
d:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-08 23:48:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 23:48:54.534: INFO: Pod "webserver-deployment-c7997dcc8-rr4rs" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rr4rs webserver-deployment-c7997dcc8- deployment-3003 /api/v1/namespaces/deployment-3003/pods/webserver-deployment-c7997dcc8-rr4rs 7a031cfd-5860-4195-a316-e5185f58736d 133023 0 2020-03-08 23:48:51 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fa98bc2c-4fab-4140-8e09-10bede88998e 0xc003741eb0 0xc003741eb1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x9wwn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x9wwn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x9wwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-08 23:48:51 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 23:48:54.534: INFO: Pod "webserver-deployment-c7997dcc8-vg4bd" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vg4bd webserver-deployment-c7997dcc8- deployment-3003 /api/v1/namespaces/deployment-3003/pods/webserver-deployment-c7997dcc8-vg4bd e3716abd-506b-4bc0-9168-d5a5581b9f5e 132906 0 2020-03-08 23:48:48 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fa98bc2c-4fab-4140-8e09-10bede88998e 0xc00376c0a0 0xc00376c0a1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x9wwn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x9wwn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x9wwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhea
d:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-08 23:48:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 23:48:54.535: INFO: Pod "webserver-deployment-c7997dcc8-zsl7q" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zsl7q webserver-deployment-c7997dcc8- deployment-3003 /api/v1/namespaces/deployment-3003/pods/webserver-deployment-c7997dcc8-zsl7q 7b6ef6cb-6a12-4c1a-b9cf-98786c49b95f 133053 0 2020-03-08 23:48:51 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fa98bc2c-4fab-4140-8e09-10bede88998e 0xc00376c270 0xc00376c271}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x9wwn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x9wwn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x9wwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:48:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-08 23:48:51 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:48:54.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3003" for this suite. • [SLOW TEST:10.333 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":280,"completed":68,"skipped":1067,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:48:54.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-9544 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-9544 STEP: Creating statefulset with conflicting port in namespace statefulset-9544 STEP: Waiting until pod test-pod will start running in namespace statefulset-9544 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9544 Mar 8 23:49:01.513: INFO: Observed stateful pod in namespace: statefulset-9544, name: ss-0, uid: eadbb93b-df20-464a-968f-e39067dcdedc, status phase: Pending. Waiting for statefulset controller to delete. Mar 8 23:49:01.607: INFO: Observed stateful pod in namespace: statefulset-9544, name: ss-0, uid: eadbb93b-df20-464a-968f-e39067dcdedc, status phase: Failed. Waiting for statefulset controller to delete. Mar 8 23:49:01.655: INFO: Observed stateful pod in namespace: statefulset-9544, name: ss-0, uid: eadbb93b-df20-464a-968f-e39067dcdedc, status phase: Failed. Waiting for statefulset controller to delete. 
Mar 8 23:49:01.954: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9544 STEP: Removing pod with conflicting port in namespace statefulset-9544 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-9544 and reaches the Running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 8 23:49:08.312: INFO: Deleting all statefulset in ns statefulset-9544 Mar 8 23:49:08.314: INFO: Scaling statefulset ss to 0 Mar 8 23:49:18.342: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 23:49:18.345: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:49:18.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9544" for this suite. • [SLOW TEST:23.812 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":280,"completed":69,"skipped":1070,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:49:18.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-2766 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a new StatefulSet Mar 8 23:49:18.503: INFO: Found 0 stateful pods, waiting for 3 Mar 8 23:49:28.509: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 8 23:49:28.509: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 8 23:49:28.509: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 8 23:49:28.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2766 ss2-1 -- /bin/sh -x -c mv -v
/usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 23:49:28.761: INFO: stderr: "I0308 23:49:28.668546 903 log.go:172] (0xc0009fb3f0) (0xc000ac0780) Create stream\nI0308 23:49:28.668599 903 log.go:172] (0xc0009fb3f0) (0xc000ac0780) Stream added, broadcasting: 1\nI0308 23:49:28.672594 903 log.go:172] (0xc0009fb3f0) Reply frame received for 1\nI0308 23:49:28.672634 903 log.go:172] (0xc0009fb3f0) (0xc000646780) Create stream\nI0308 23:49:28.672651 903 log.go:172] (0xc0009fb3f0) (0xc000646780) Stream added, broadcasting: 3\nI0308 23:49:28.673587 903 log.go:172] (0xc0009fb3f0) Reply frame received for 3\nI0308 23:49:28.673617 903 log.go:172] (0xc0009fb3f0) (0xc000727400) Create stream\nI0308 23:49:28.673625 903 log.go:172] (0xc0009fb3f0) (0xc000727400) Stream added, broadcasting: 5\nI0308 23:49:28.674534 903 log.go:172] (0xc0009fb3f0) Reply frame received for 5\nI0308 23:49:28.730658 903 log.go:172] (0xc0009fb3f0) Data frame received for 5\nI0308 23:49:28.730688 903 log.go:172] (0xc000727400) (5) Data frame handling\nI0308 23:49:28.730709 903 log.go:172] (0xc000727400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 23:49:28.754604 903 log.go:172] (0xc0009fb3f0) Data frame received for 3\nI0308 23:49:28.754624 903 log.go:172] (0xc000646780) (3) Data frame handling\nI0308 23:49:28.754650 903 log.go:172] (0xc000646780) (3) Data frame sent\nI0308 23:49:28.755778 903 log.go:172] (0xc0009fb3f0) Data frame received for 5\nI0308 23:49:28.755812 903 log.go:172] (0xc000727400) (5) Data frame handling\nI0308 23:49:28.755839 903 log.go:172] (0xc0009fb3f0) Data frame received for 3\nI0308 23:49:28.755856 903 log.go:172] (0xc000646780) (3) Data frame handling\nI0308 23:49:28.758256 903 log.go:172] (0xc0009fb3f0) Data frame received for 1\nI0308 23:49:28.758328 903 log.go:172] (0xc000ac0780) (1) Data frame handling\nI0308 23:49:28.758384 903 log.go:172] (0xc000ac0780) (1) Data frame sent\nI0308 23:49:28.758443 903 log.go:172] (0xc0009fb3f0) (0xc000ac0780) Stream removed, broadcasting: 1\nI0308 23:49:28.758477 903 log.go:172] (0xc0009fb3f0) Go away received\nI0308 23:49:28.758954 903 log.go:172] (0xc0009fb3f0) (0xc000ac0780) Stream removed, broadcasting: 1\nI0308 23:49:28.758986 903 log.go:172] (0xc0009fb3f0) (0xc000646780) Stream removed, broadcasting: 3\nI0308 23:49:28.759001 903 log.go:172] (0xc0009fb3f0) (0xc000727400) Stream removed, broadcasting: 5\n" Mar 8 23:49:28.762: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 23:49:28.762: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 8 23:49:38.795: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 8 23:49:48.818: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2766 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 23:49:49.047: INFO: stderr: "I0308 23:49:48.975077 922 log.go:172] (0xc000a8b600) (0xc000a768c0) Create stream\nI0308 23:49:48.975128 922 log.go:172] (0xc000a8b600) (0xc000a768c0) Stream added, broadcasting: 1\nI0308 23:49:48.979817 922 log.go:172] (0xc000a8b600) Reply frame received for 1\nI0308 23:49:48.979858 922 log.go:172] (0xc000a8b600) (0xc0005c2780) Create 
stream\nI0308 23:49:48.979869 922 log.go:172] (0xc000a8b600) (0xc0005c2780) Stream added, broadcasting: 3\nI0308 23:49:48.980979 922 log.go:172] (0xc000a8b600) Reply frame received for 3\nI0308 23:49:48.981009 922 log.go:172] (0xc000a8b600) (0xc000793400) Create stream\nI0308 23:49:48.981018 922 log.go:172] (0xc000a8b600) (0xc000793400) Stream added, broadcasting: 5\nI0308 23:49:48.981884 922 log.go:172] (0xc000a8b600) Reply frame received for 5\nI0308 23:49:49.041583 922 log.go:172] (0xc000a8b600) Data frame received for 3\nI0308 23:49:49.041608 922 log.go:172] (0xc0005c2780) (3) Data frame handling\nI0308 23:49:49.041630 922 log.go:172] (0xc0005c2780) (3) Data frame sent\nI0308 23:49:49.041640 922 log.go:172] (0xc000a8b600) Data frame received for 3\nI0308 23:49:49.041647 922 log.go:172] (0xc0005c2780) (3) Data frame handling\nI0308 23:49:49.041836 922 log.go:172] (0xc000a8b600) Data frame received for 5\nI0308 23:49:49.041859 922 log.go:172] (0xc000793400) (5) Data frame handling\nI0308 23:49:49.041874 922 log.go:172] (0xc000793400) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 23:49:49.041888 922 log.go:172] (0xc000a8b600) Data frame received for 5\nI0308 23:49:49.041931 922 log.go:172] (0xc000793400) (5) Data frame handling\nI0308 23:49:49.042916 922 log.go:172] (0xc000a8b600) Data frame received for 1\nI0308 23:49:49.042947 922 log.go:172] (0xc000a768c0) (1) Data frame handling\nI0308 23:49:49.042960 922 log.go:172] (0xc000a768c0) (1) Data frame sent\nI0308 23:49:49.042983 922 log.go:172] (0xc000a8b600) (0xc000a768c0) Stream removed, broadcasting: 1\nI0308 23:49:49.043019 922 log.go:172] (0xc000a8b600) Go away received\nI0308 23:49:49.044095 922 log.go:172] (0xc000a8b600) (0xc000a768c0) Stream removed, broadcasting: 1\nI0308 23:49:49.044131 922 log.go:172] (0xc000a8b600) (0xc0005c2780) Stream removed, broadcasting: 3\nI0308 23:49:49.044144 922 log.go:172] (0xc000a8b600) (0xc000793400) Stream removed, broadcasting: 5\n" Mar 8 23:49:49.047: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 23:49:49.047: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 23:50:09.068: INFO: Waiting for StatefulSet statefulset-2766/ss2 to complete update Mar 8 23:50:09.068: INFO: Waiting for Pod statefulset-2766/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Mar 8 23:50:19.076: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2766 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 23:50:19.323: INFO: stderr: "I0308 23:50:19.222520 943 log.go:172] (0xc0009b7290) (0xc000a1e460) Create stream\nI0308 23:50:19.222579 943 log.go:172] (0xc0009b7290) (0xc000a1e460) Stream added, broadcasting: 1\nI0308 23:50:19.226958 943 log.go:172] (0xc0009b7290) Reply frame received for 1\nI0308 23:50:19.226994 943 log.go:172] (0xc0009b7290) (0xc00061e780) Create stream\nI0308 23:50:19.227005 943 log.go:172] (0xc0009b7290) (0xc00061e780) Stream added, broadcasting: 3\nI0308 23:50:19.227927 943 log.go:172] (0xc0009b7290) Reply frame received for 3\nI0308 23:50:19.227957 943 log.go:172] (0xc0009b7290) (0xc00052d400) Create stream\nI0308 23:50:19.227972 943 log.go:172] (0xc0009b7290) (0xc00052d400) Stream added, broadcasting: 5\nI0308 23:50:19.228812 943 log.go:172] (0xc0009b7290) 
Reply frame received for 5\nI0308 23:50:19.300448 943 log.go:172] (0xc0009b7290) Data frame received for 5\nI0308 23:50:19.300470 943 log.go:172] (0xc00052d400) (5) Data frame handling\nI0308 23:50:19.300483 943 log.go:172] (0xc00052d400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 23:50:19.317978 943 log.go:172] (0xc0009b7290) Data frame received for 5\nI0308 23:50:19.318014 943 log.go:172] (0xc00052d400) (5) Data frame handling\nI0308 23:50:19.318032 943 log.go:172] (0xc0009b7290) Data frame received for 3\nI0308 23:50:19.318038 943 log.go:172] (0xc00061e780) (3) Data frame handling\nI0308 23:50:19.318045 943 log.go:172] (0xc00061e780) (3) Data frame sent\nI0308 23:50:19.318055 943 log.go:172] (0xc0009b7290) Data frame received for 3\nI0308 23:50:19.318060 943 log.go:172] (0xc00061e780) (3) Data frame handling\nI0308 23:50:19.319738 943 log.go:172] (0xc0009b7290) Data frame received for 1\nI0308 23:50:19.319755 943 log.go:172] (0xc000a1e460) (1) Data frame handling\nI0308 23:50:19.319762 943 log.go:172] (0xc000a1e460) (1) Data frame sent\nI0308 23:50:19.319771 943 log.go:172] (0xc0009b7290) (0xc000a1e460) Stream removed, broadcasting: 1\nI0308 23:50:19.320024 943 log.go:172] (0xc0009b7290) (0xc000a1e460) Stream removed, broadcasting: 1\nI0308 23:50:19.320042 943 log.go:172] (0xc0009b7290) Go away received\nI0308 23:50:19.320067 943 log.go:172] (0xc0009b7290) (0xc00061e780) Stream removed, broadcasting: 3\nI0308 23:50:19.320082 943 log.go:172] (0xc0009b7290) (0xc00052d400) Stream removed, broadcasting: 5\n" Mar 8 23:50:19.323: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 23:50:19.323: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 23:50:29.354: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 8 23:50:39.415: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2766 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 23:50:39.650: INFO: stderr: "I0308 23:50:39.573364 964 log.go:172] (0xc0009220b0) (0xc0009a4140) Create stream\nI0308 23:50:39.573409 964 log.go:172] (0xc0009220b0) (0xc0009a4140) Stream added, broadcasting: 1\nI0308 23:50:39.579940 964 log.go:172] (0xc0009220b0) Reply frame received for 1\nI0308 23:50:39.579989 964 log.go:172] (0xc0009220b0) (0xc000235360) Create stream\nI0308 23:50:39.580001 964 log.go:172] (0xc0009220b0) (0xc000235360) Stream added, broadcasting: 3\nI0308 23:50:39.581230 964 log.go:172] (0xc0009220b0) Reply frame received for 3\nI0308 23:50:39.581262 964 log.go:172] (0xc0009220b0) (0xc000235400) Create stream\nI0308 23:50:39.581277 964 log.go:172] (0xc0009220b0) (0xc000235400) Stream added, broadcasting: 5\nI0308 23:50:39.586086 964 log.go:172] (0xc0009220b0) Reply frame received for 5\nI0308 23:50:39.644925 964 log.go:172] (0xc0009220b0) Data frame received for 5\nI0308 23:50:39.644964 964 log.go:172] (0xc000235400) (5) Data frame handling\nI0308 23:50:39.644977 964 log.go:172] (0xc000235400) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 23:50:39.644993 964 log.go:172] (0xc0009220b0) Data frame received for 3\nI0308 23:50:39.645001 964 log.go:172] (0xc000235360) (3) Data frame handling\nI0308 23:50:39.645011 964 log.go:172] (0xc000235360) (3) Data frame sent\nI0308 23:50:39.645023 964 log.go:172] 
(0xc0009220b0) Data frame received for 3\nI0308 23:50:39.645041 964 log.go:172] (0xc000235360) (3) Data frame handling\nI0308 23:50:39.645063 964 log.go:172] (0xc0009220b0) Data frame received for 5\nI0308 23:50:39.645076 964 log.go:172] (0xc000235400) (5) Data frame handling\nI0308 23:50:39.646469 964 log.go:172] (0xc0009220b0) Data frame received for 1\nI0308 23:50:39.646489 964 log.go:172] (0xc0009a4140) (1) Data frame handling\nI0308 23:50:39.646501 964 log.go:172] (0xc0009a4140) (1) Data frame sent\nI0308 23:50:39.646540 964 log.go:172] (0xc0009220b0) (0xc0009a4140) Stream removed, broadcasting: 1\nI0308 23:50:39.646559 964 log.go:172] (0xc0009220b0) Go away received\nI0308 23:50:39.646896 964 log.go:172] (0xc0009220b0) (0xc0009a4140) Stream removed, broadcasting: 1\nI0308 23:50:39.646914 964 log.go:172] (0xc0009220b0) (0xc000235360) Stream removed, broadcasting: 3\nI0308 23:50:39.646923 964 log.go:172] (0xc0009220b0) (0xc000235400) Stream removed, broadcasting: 5\n" Mar 8 23:50:39.650: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 23:50:39.650: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 23:50:49.671: INFO: Waiting for StatefulSet statefulset-2766/ss2 to complete update Mar 8 23:50:49.671: INFO: Waiting for Pod statefulset-2766/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Mar 8 23:50:49.671: INFO: Waiting for Pod statefulset-2766/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Mar 8 23:50:49.671: INFO: Waiting for Pod statefulset-2766/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Mar 8 23:50:59.678: INFO: Waiting for StatefulSet statefulset-2766/ss2 to complete update Mar 8 23:50:59.678: INFO: Waiting for Pod statefulset-2766/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Mar 8 23:51:09.679: INFO: Waiting for StatefulSet statefulset-2766/ss2 to complete update Mar 8 23:51:09.679: INFO: Waiting for Pod statefulset-2766/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 8 23:51:19.688: INFO: Deleting all statefulset in ns statefulset-2766 Mar 8 23:51:19.691: INFO: Scaling statefulset ss2 to 0 Mar 8 23:51:49.710: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 23:51:49.713: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:51:49.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2766" for this suite. 
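The rolling update and rollback exercised above can be reproduced by hand against any StatefulSet. The following is a minimal sketch using the names from this run (StatefulSet ss2 in namespace statefulset-2766, images docker.io/library/httpd:2.4.38-alpine and 2.4.39-alpine); the container name webserver is an assumption, since the log never prints it, and kubectl rollout status stands in for the test's manual revision polling.

    # Trigger a rolling update by changing the pod template image,
    # the equivalent of the template update the test performs at 23:49:38.
    # "webserver" is an assumed container name; substitute your own.
    kubectl --namespace=statefulset-2766 set image statefulset/ss2 webserver=docker.io/library/httpd:2.4.39-alpine

    # The controller replaces pods in reverse ordinal order: ss2-2, ss2-1, ss2-0.
    kubectl --namespace=statefulset-2766 rollout status statefulset/ss2

    # Each template version is recorded as a ControllerRevision
    # (ss2-65c7964b94 and ss2-84f9d6bf57 in the log above).
    kubectl --namespace=statefulset-2766 get controllerrevisions

    # Roll back by restoring the previous image; pods are again
    # replaced in reverse ordinal order, as the log shows.
    kubectl --namespace=statefulset-2766 set image statefulset/ss2 webserver=docker.io/library/httpd:2.4.38-alpine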
• [SLOW TEST:151.379 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":280,"completed":70,"skipped":1116,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:51:49.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating Agnhost RC Mar 8 23:51:49.813: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1397' Mar 8 23:51:50.150: INFO: stderr: "" Mar 8 23:51:50.150: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 8 23:51:51.155: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 23:51:51.155: INFO: Found 0 / 1 Mar 8 23:51:52.154: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 23:51:52.155: INFO: Found 1 / 1 Mar 8 23:51:52.155: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 8 23:51:52.158: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 23:51:52.158: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 8 23:51:52.158: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config patch pod agnhost-master-7mxsh --namespace=kubectl-1397 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 8 23:51:52.284: INFO: stderr: "" Mar 8 23:51:52.284: INFO: stdout: "pod/agnhost-master-7mxsh patched\n" STEP: checking annotations Mar 8 23:51:52.326: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 23:51:52.326: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:51:52.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1397" for this suite. 
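Note on the patch just exercised: stripped of the harness flags, it is a plain strategic-merge patch (namespace and pod name are from this run and vary per run).
kubectl --namespace=kubectl-1397 patch pod agnhost-master-7mxsh -p '{"metadata":{"annotations":{"x":"y"}}}'
# Confirm the annotation landed:
kubectl --namespace=kubectl-1397 get pod agnhost-master-7mxsh -o jsonpath='{.metadata.annotations.x}'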
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":280,"completed":71,"skipped":1117,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:51:52.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 23:51:52.825: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 23:51:55.897: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 23:51:55.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8217-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:51:57.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9959" for this suite. STEP: Destroying namespace "webhook-9959-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":280,"completed":72,"skipped":1143,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:51:57.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-3467 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a new StatefulSet Mar 8 23:51:57.238: INFO: Found 0 stateful pods, waiting for 3 Mar 8 23:52:07.243: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 8 23:52:07.243: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 8 23:52:07.243: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 8 23:52:07.269: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 8 23:52:17.323: INFO: Updating stateful set ss2 Mar 8 23:52:17.335: INFO: Waiting for Pod statefulset-3467/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Mar 8 23:52:27.446: INFO: Found 2 stateful pods, waiting for 3 Mar 8 23:52:37.451: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 8 23:52:37.451: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 8 23:52:37.451: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 8 23:52:37.475: INFO: Updating stateful set ss2 Mar 8 23:52:37.498: INFO: Waiting for Pod statefulset-3467/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 8 23:52:47.523: INFO: Updating stateful set ss2 Mar 8 23:52:47.547: INFO: Waiting for StatefulSet statefulset-3467/ss2 to complete update Mar 8 23:52:47.547: INFO: Waiting for Pod statefulset-3467/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality 
[StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 8 23:52:57.555: INFO: Deleting all statefulset in ns statefulset-3467 Mar 8 23:52:57.559: INFO: Scaling statefulset ss2 to 0 Mar 8 23:53:07.583: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 23:53:07.586: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:53:07.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3467" for this suite. • [SLOW TEST:70.460 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":280,"completed":73,"skipped":1145,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:53:07.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-projected-ndxc STEP: Creating a pod to test atomic-volume-subpath Mar 8 23:53:07.710: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-ndxc" in namespace "subpath-7717" to be "success or failure" Mar 8 23:53:07.731: INFO: Pod "pod-subpath-test-projected-ndxc": Phase="Pending", Reason="", readiness=false. Elapsed: 20.757317ms Mar 8 23:53:09.741: INFO: Pod "pod-subpath-test-projected-ndxc": Phase="Running", Reason="", readiness=true. Elapsed: 2.030261669s Mar 8 23:53:11.746: INFO: Pod "pod-subpath-test-projected-ndxc": Phase="Running", Reason="", readiness=true. Elapsed: 4.03568975s Mar 8 23:53:13.750: INFO: Pod "pod-subpath-test-projected-ndxc": Phase="Running", Reason="", readiness=true. Elapsed: 6.039889433s Mar 8 23:53:15.755: INFO: Pod "pod-subpath-test-projected-ndxc": Phase="Running", Reason="", readiness=true. Elapsed: 8.044365535s Mar 8 23:53:17.759: INFO: Pod "pod-subpath-test-projected-ndxc": Phase="Running", Reason="", readiness=true. Elapsed: 10.048871461s Mar 8 23:53:19.763: INFO: Pod "pod-subpath-test-projected-ndxc": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.052731814s Mar 8 23:53:21.769: INFO: Pod "pod-subpath-test-projected-ndxc": Phase="Running", Reason="", readiness=true. Elapsed: 14.058973607s Mar 8 23:53:23.774: INFO: Pod "pod-subpath-test-projected-ndxc": Phase="Running", Reason="", readiness=true. Elapsed: 16.063619396s Mar 8 23:53:25.778: INFO: Pod "pod-subpath-test-projected-ndxc": Phase="Running", Reason="", readiness=true. Elapsed: 18.068148278s Mar 8 23:53:27.782: INFO: Pod "pod-subpath-test-projected-ndxc": Phase="Running", Reason="", readiness=true. Elapsed: 20.072009552s Mar 8 23:53:29.786: INFO: Pod "pod-subpath-test-projected-ndxc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.075701367s STEP: Saw pod success Mar 8 23:53:29.786: INFO: Pod "pod-subpath-test-projected-ndxc" satisfied condition "success or failure" Mar 8 23:53:29.789: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-ndxc container test-container-subpath-projected-ndxc: STEP: delete the pod Mar 8 23:53:29.863: INFO: Waiting for pod pod-subpath-test-projected-ndxc to disappear Mar 8 23:53:29.869: INFO: Pod pod-subpath-test-projected-ndxc no longer exists STEP: Deleting pod pod-subpath-test-projected-ndxc Mar 8 23:53:29.869: INFO: Deleting pod "pod-subpath-test-projected-ndxc" in namespace "subpath-7717" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:53:29.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7717" for this suite. • [SLOW TEST:22.265 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":280,"completed":74,"skipped":1210,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:53:29.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 8 23:53:29.950: INFO: Waiting up to 5m0s for pod "pod-8ab85ae3-9322-40b5-af2b-4505da6d5f02" in namespace "emptydir-934" to be "success or failure" Mar 8 23:53:29.953: INFO: Pod "pod-8ab85ae3-9322-40b5-af2b-4505da6d5f02": Phase="Pending", Reason="", readiness=false. Elapsed: 3.017275ms Mar 8 23:53:31.957: INFO: Pod "pod-8ab85ae3-9322-40b5-af2b-4505da6d5f02": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007294927s STEP: Saw pod success Mar 8 23:53:31.957: INFO: Pod "pod-8ab85ae3-9322-40b5-af2b-4505da6d5f02" satisfied condition "success or failure" Mar 8 23:53:31.961: INFO: Trying to get logs from node latest-worker pod pod-8ab85ae3-9322-40b5-af2b-4505da6d5f02 container test-container: STEP: delete the pod Mar 8 23:53:31.992: INFO: Waiting for pod pod-8ab85ae3-9322-40b5-af2b-4505da6d5f02 to disappear Mar 8 23:53:32.002: INFO: Pod pod-8ab85ae3-9322-40b5-af2b-4505da6d5f02 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:53:32.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-934" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":75,"skipped":1251,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:53:32.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-secret-ctfr STEP: Creating a pod to test atomic-volume-subpath Mar 8 23:53:32.123: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-ctfr" in namespace "subpath-3855" to be "success or failure" Mar 8 23:53:32.127: INFO: Pod "pod-subpath-test-secret-ctfr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056743ms Mar 8 23:53:34.131: INFO: Pod "pod-subpath-test-secret-ctfr": Phase="Running", Reason="", readiness=true. Elapsed: 2.008264865s Mar 8 23:53:36.135: INFO: Pod "pod-subpath-test-secret-ctfr": Phase="Running", Reason="", readiness=true. Elapsed: 4.012202009s Mar 8 23:53:38.139: INFO: Pod "pod-subpath-test-secret-ctfr": Phase="Running", Reason="", readiness=true. Elapsed: 6.016179424s Mar 8 23:53:40.143: INFO: Pod "pod-subpath-test-secret-ctfr": Phase="Running", Reason="", readiness=true. Elapsed: 8.020054152s Mar 8 23:53:42.147: INFO: Pod "pod-subpath-test-secret-ctfr": Phase="Running", Reason="", readiness=true. Elapsed: 10.023924556s Mar 8 23:53:44.150: INFO: Pod "pod-subpath-test-secret-ctfr": Phase="Running", Reason="", readiness=true. Elapsed: 12.02773503s Mar 8 23:53:46.154: INFO: Pod "pod-subpath-test-secret-ctfr": Phase="Running", Reason="", readiness=true. Elapsed: 14.031697415s Mar 8 23:53:48.158: INFO: Pod "pod-subpath-test-secret-ctfr": Phase="Running", Reason="", readiness=true. Elapsed: 16.035052629s Mar 8 23:53:50.162: INFO: Pod "pod-subpath-test-secret-ctfr": Phase="Running", Reason="", readiness=true. Elapsed: 18.03902875s Mar 8 23:53:52.166: INFO: Pod "pod-subpath-test-secret-ctfr": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.043267982s Mar 8 23:53:54.170: INFO: Pod "pod-subpath-test-secret-ctfr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.047381501s STEP: Saw pod success Mar 8 23:53:54.170: INFO: Pod "pod-subpath-test-secret-ctfr" satisfied condition "success or failure" Mar 8 23:53:54.173: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-ctfr container test-container-subpath-secret-ctfr: STEP: delete the pod Mar 8 23:53:54.191: INFO: Waiting for pod pod-subpath-test-secret-ctfr to disappear Mar 8 23:53:54.213: INFO: Pod pod-subpath-test-secret-ctfr no longer exists STEP: Deleting pod pod-subpath-test-secret-ctfr Mar 8 23:53:54.213: INFO: Deleting pod "pod-subpath-test-secret-ctfr" in namespace "subpath-3855" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:53:54.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3855" for this suite. • [SLOW TEST:22.216 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":280,"completed":76,"skipped":1262,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:53:54.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 23:53:54.296: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:53:56.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5525" for this suite. 
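Note on the websocket log check just logged: it exercises the same pod log subresource that kubectl reads. By hand, the non-websocket equivalent is simply the following (pod name is per-run); the e2e test instead opens the log endpoint, roughly GET /api/v1/namespaces/pods-5525/pods/<pod-name>/log, with a websocket upgrade and asserts the streamed output matches.
kubectl --namespace=pods-5525 logs <pod-name>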
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":280,"completed":77,"skipped":1291,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:53:56.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Mar 8 23:53:56.393: INFO: PodSpec: initContainers in spec.initContainers Mar 8 23:54:40.782: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-ffdcfccf-f9a7-4978-b0b4-6e310c7d7b1e", GenerateName:"", Namespace:"init-container-383", SelfLink:"/api/v1/namespaces/init-container-383/pods/pod-init-ffdcfccf-f9a7-4978-b0b4-6e310c7d7b1e", UID:"e73f5bf8-e308-4121-92c1-8d181f4d43db", ResourceVersion:"135312", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719308436, loc:(*time.Location)(0x7e52ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"393661288"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-4w4rk", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc003f36200), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", 
Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4w4rk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4w4rk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4w4rk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0035d4d78), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00296ff80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", 
Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0035d4e00)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0035d4e20)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0035d4e28), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0035d4e2c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308436, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308436, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308436, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308436, loc:(*time.Location)(0x7e52ca0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.16", PodIP:"10.244.1.50", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.50"}}, StartTime:(*v1.Time)(0xc002287860), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00293cb60)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00293cbd0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://eba63e2470bcf820ba03f66a045704384d7a9493cc0ae0bac2db49e72bf1f92d", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0022878e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002287880), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0035d4eaf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:54:40.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-383" for this suite. • [SLOW TEST:44.505 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":280,"completed":78,"skipped":1321,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:54:40.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: getting the auto-created API token Mar 8 23:54:41.420: INFO: created pod pod-service-account-defaultsa Mar 8 23:54:41.420: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 8 23:54:41.429: INFO: created pod pod-service-account-mountsa Mar 8 23:54:41.429: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 8 23:54:41.434: INFO: created pod pod-service-account-nomountsa Mar 8 23:54:41.434: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 8 23:54:41.463: INFO: created pod pod-service-account-defaultsa-mountspec Mar 8 23:54:41.463: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 8 23:54:41.471: INFO: created pod pod-service-account-mountsa-mountspec Mar 8 23:54:41.471: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 8 23:54:41.526: INFO: created pod pod-service-account-nomountsa-mountspec Mar 8 23:54:41.526: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 8 23:54:41.537: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 8 23:54:41.537: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 8 23:54:41.572: INFO: created pod 
pod-service-account-mountsa-nomountspec Mar 8 23:54:41.572: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 8 23:54:41.620: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 8 23:54:41.620: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:54:41.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8517" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":280,"completed":79,"skipped":1333,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:54:41.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 23:54:41.961: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 8 23:54:46.964: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 8 23:54:46.964: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 8 23:54:47.004: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-3312 /apis/apps/v1/namespaces/deployment-3312/deployments/test-cleanup-deployment 406404c8-897c-4111-b296-64d4a7cf079c 135428 1 2020-03-08 23:54:46 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003436c78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Mar 8 23:54:47.084: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-3312 /apis/apps/v1/namespaces/deployment-3312/replicasets/test-cleanup-deployment-55ffc6b7b6 379b15ad-5b7a-4153-a073-30295e212882 135435 1 2020-03-08 23:54:46 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 406404c8-897c-4111-b296-64d4a7cf079c 0xc003437087 0xc003437088}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0034370f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 8 23:54:47.084: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 8 23:54:47.084: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-3312 /apis/apps/v1/namespaces/deployment-3312/replicasets/test-cleanup-controller ee8f8ad4-f3ea-4f55-8cdc-04fbcbc79d2b 135429 1 2020-03-08 23:54:41 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 406404c8-897c-4111-b296-64d4a7cf079c 0xc003436fb7 0xc003436fb8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003437018 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 8
23:54:47.089: INFO: Pod "test-cleanup-controller-tn2qf" is available: &Pod{ObjectMeta:{test-cleanup-controller-tn2qf test-cleanup-controller- deployment-3312 /api/v1/namespaces/deployment-3312/pods/test-cleanup-controller-tn2qf 32dcd462-4a98-4907-adfd-6ddf725b1335 135400 0 2020-03-08 23:54:41 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller ee8f8ad4-f3ea-4f55-8cdc-04fbcbc79d2b 0xc003437537 0xc003437538}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnnb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnnb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:54:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:54:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:54:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-03-08 23:54:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.55,StartTime:2020-03-08 23:54:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 23:54:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://50b908488167b15c6420191c51a02ce09283bcaeaf2a9a9165f65abf1d6c20e8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.55,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 23:54:47.089: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-f2rdk" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-f2rdk test-cleanup-deployment-55ffc6b7b6- deployment-3312 /api/v1/namespaces/deployment-3312/pods/test-cleanup-deployment-55ffc6b7b6-f2rdk 0dcc62df-66c2-46cc-a5b8-84e47765af7e 135436 0 2020-03-08 23:54:46 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 379b15ad-5b7a-4153-a073-30295e212882 0xc0034376c7 0xc0034376c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnnb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnnb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolera
tions:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:54:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:54:47.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3312" for this suite. • [SLOW TEST:5.478 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":280,"completed":80,"skipped":1381,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:54:47.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:54:58.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1760" for this suite. • [SLOW TEST:11.215 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":280,"completed":81,"skipped":1385,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:54:58.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 8 23:54:58.506: INFO: Waiting up to 5m0s for pod "pod-5b787df0-666a-4747-8552-51171b612086" in namespace "emptydir-1181" to be "success or failure" Mar 8 23:54:58.532: INFO: Pod "pod-5b787df0-666a-4747-8552-51171b612086": Phase="Pending", Reason="", readiness=false. Elapsed: 25.509186ms Mar 8 23:55:00.535: INFO: Pod "pod-5b787df0-666a-4747-8552-51171b612086": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.028872712s STEP: Saw pod success Mar 8 23:55:00.535: INFO: Pod "pod-5b787df0-666a-4747-8552-51171b612086" satisfied condition "success or failure" Mar 8 23:55:00.538: INFO: Trying to get logs from node latest-worker pod pod-5b787df0-666a-4747-8552-51171b612086 container test-container: STEP: delete the pod Mar 8 23:55:00.574: INFO: Waiting for pod pod-5b787df0-666a-4747-8552-51171b612086 to disappear Mar 8 23:55:00.578: INFO: Pod pod-5b787df0-666a-4747-8552-51171b612086 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:55:00.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1181" for this suite. 
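Note on the two EmptyDir mode tests in this stretch (0644 earlier, 0777 just above): both create a short-lived pod that writes a file into an emptyDir on the node's default medium and verifies the resulting mode. An approximate busybox equivalent follows; the pod name and command are assumptions, and the real suite uses its own test image rather than a shell one-liner.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo        # hypothetical name
spec:
  restartPolicy: Never
  volumes:
  - name: scratch
    emptyDir: {}                  # default medium, i.e. node-local disk
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /ed/f && chmod 0777 /ed/f && ls -l /ed/f"]
    volumeMounts:
    - name: scratch
      mountPath: /ed
EOF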
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":82,"skipped":1405,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:55:00.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 8 23:55:00.695: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 23:55:00.697: INFO: Number of nodes with available pods: 0 Mar 8 23:55:00.697: INFO: Node latest-worker is running more than one daemon pod Mar 8 23:55:01.701: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 23:55:01.705: INFO: Number of nodes with available pods: 0 Mar 8 23:55:01.705: INFO: Node latest-worker is running more than one daemon pod Mar 8 23:55:02.706: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 23:55:02.710: INFO: Number of nodes with available pods: 1 Mar 8 23:55:02.710: INFO: Node latest-worker is running more than one daemon pod Mar 8 23:55:03.702: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 23:55:03.705: INFO: Number of nodes with available pods: 2 Mar 8 23:55:03.705: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Mar 8 23:55:03.736: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 23:55:03.738: INFO: Number of nodes with available pods: 1 Mar 8 23:55:03.738: INFO: Node latest-worker2 is running more than one daemon pod Mar 8 23:55:04.754: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 23:55:04.757: INFO: Number of nodes with available pods: 1 Mar 8 23:55:04.757: INFO: Node latest-worker2 is running more than one daemon pod Mar 8 23:55:05.754: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 23:55:05.772: INFO: Number of nodes with available pods: 1 Mar 8 23:55:05.772: INFO: Node latest-worker2 is running more than one daemon pod Mar 8 23:55:06.742: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 23:55:06.745: INFO: Number of nodes with available pods: 1 Mar 8 23:55:06.745: INFO: Node latest-worker2 is running more than one daemon pod Mar 8 23:55:07.743: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 23:55:07.747: INFO: Number of nodes with available pods: 1 Mar 8 23:55:07.747: INFO: Node latest-worker2 is running more than one daemon pod Mar 8 23:55:08.743: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 23:55:08.746: INFO: Number of nodes with available pods: 2 Mar 8 23:55:08.746: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7147, will wait for the garbage collector to delete the pods Mar 8 23:55:08.814: INFO: Deleting DaemonSet.extensions daemon-set took: 12.928782ms Mar 8 23:55:09.114: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.227015ms Mar 8 23:55:12.617: INFO: Number of nodes with available pods: 0 Mar 8 23:55:12.617: INFO: Number of running nodes: 0, number of available pods: 0 Mar 8 23:55:12.623: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7147/daemonsets","resourceVersion":"135671"},"items":null} Mar 8 23:55:12.626: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7147/pods","resourceVersion":"135671"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:55:12.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7147" for this suite. 
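The shape of this DaemonSet test in plain kubectl terms, with illustrative names (the suite builds its DaemonSet programmatically): create a trivial daemon, kill its pod on one node, and watch the controller revive it. Note from the taint messages above that the tainted control-plane node is skipped throughout.

  # Minimal daemon that should land on every schedulable node.
  cat <<'EOF' | kubectl apply -f -
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set-demo
  spec:
    selector:
      matchLabels:
        app: daemon-set-demo
    template:
      metadata:
        labels:
          app: daemon-set-demo
      spec:
        containers:
        - name: app
          image: docker.io/library/busybox:1.29
          command: ["sh", "-c", "sleep 3600"]
  EOF
  # Stop the daemon pod on one node; the controller brings it back.
  kubectl delete pod -l app=daemon-set-demo --field-selector spec.nodeName=latest-worker2
  kubectl get pods -l app=daemon-set-demo -o wide --watch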
• [SLOW TEST:12.051 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":280,"completed":83,"skipped":1419,"failed":0} [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:55:12.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name projected-secret-test-57fa18a9-f9a2-448a-8c6e-01b6df6d92b8 STEP: Creating a pod to test consume secrets Mar 8 23:55:12.798: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f152975b-4918-4121-8a0f-cf1aa960832d" in namespace "projected-89" to be "success or failure" Mar 8 23:55:12.830: INFO: Pod "pod-projected-secrets-f152975b-4918-4121-8a0f-cf1aa960832d": Phase="Pending", Reason="", readiness=false. Elapsed: 31.810565ms Mar 8 23:55:14.834: INFO: Pod "pod-projected-secrets-f152975b-4918-4121-8a0f-cf1aa960832d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035861131s Mar 8 23:55:16.846: INFO: Pod "pod-projected-secrets-f152975b-4918-4121-8a0f-cf1aa960832d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047943102s STEP: Saw pod success Mar 8 23:55:16.846: INFO: Pod "pod-projected-secrets-f152975b-4918-4121-8a0f-cf1aa960832d" satisfied condition "success or failure" Mar 8 23:55:16.849: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-f152975b-4918-4121-8a0f-cf1aa960832d container projected-secret-volume-test: STEP: delete the pod Mar 8 23:55:16.907: INFO: Waiting for pod pod-projected-secrets-f152975b-4918-4121-8a0f-cf1aa960832d to disappear Mar 8 23:55:16.914: INFO: Pod pod-projected-secrets-f152975b-4918-4121-8a0f-cf1aa960832d no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:55:16.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-89" for this suite. 
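Roughly what the projected-secret case above does, as a hand-written manifest; the secret, pod, and mount names are illustrative, and 0400 stands in for whatever defaultMode the framework picks:

  kubectl create secret generic projected-demo-secret --from-literal=data-1=value-1
  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-secret-volume-test
      image: docker.io/library/busybox:1.29
      # The file should appear with the projected mode, here 0400.
      command: ["sh", "-c", "ls -l /projected/data-1 && cat /projected/data-1"]
      volumeMounts:
      - name: creds
        mountPath: /projected
    volumes:
    - name: creds
      projected:
        defaultMode: 256   # decimal for octal 0400
        sources:
        - secret:
            name: projected-demo-secret
  EOF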
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":84,"skipped":1419,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:55:16.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 8 23:55:21.035: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 8 23:55:21.041: INFO: Pod pod-with-prestop-http-hook still exists Mar 8 23:55:23.041: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 8 23:55:23.046: INFO: Pod pod-with-prestop-http-hook still exists Mar 8 23:55:25.041: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 8 23:55:25.047: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:55:25.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9504" for this suite. 
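A sketch of the prestop-hook scenario just logged: one pod serves HTTP (the "container to handle the HTTPGet hook request" above), and the pod under test declares a preStop httpGet aimed at it, so deleting the pod fires a request before the container exits. The handler IP below is a placeholder, not a value from this run.

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-prestop-http-hook-demo
  spec:
    containers:
    - name: main
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "sleep 3600"]
      lifecycle:
        preStop:
          httpGet:
            host: 10.244.0.99   # placeholder: IP of the HTTP handler pod
            port: 8080
            path: /echo?msg=prestop
  EOF
  # Deleting the pod triggers the preStop GET before the container stops.
  kubectl delete pod pod-with-prestop-http-hook-demo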
• [SLOW TEST:8.137 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":280,"completed":85,"skipped":1427,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:55:25.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 8 23:55:25.748: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 8 23:55:27.758: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308525, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308525, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308525, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308525, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 23:55:30.791: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 23:55:30.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] 
CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:55:31.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9840" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:6.963 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":280,"completed":86,"skipped":1488,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:55:32.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Starting the proxy Mar 8 23:55:32.092: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix399631769/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:55:32.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5211" for this suite. 
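The unix-socket proxy check above can be reproduced by hand; the socket path is arbitrary:

  # Serve the API over a unix socket instead of a TCP port...
  kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
  # ...then retrieve /api/ through the socket (curl 7.40+).
  curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/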
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":280,"completed":87,"skipped":1508,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:55:32.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Mar 8 23:55:32.256: INFO: >>> kubeConfig: /root/.kube/config Mar 8 23:55:34.819: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:55:43.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-149" for this suite. • [SLOW TEST:10.876 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":280,"completed":88,"skipped":1524,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:55:43.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 8 23:55:43.109: INFO: Waiting up to 5m0s for pod "downwardapi-volume-05ff5632-49d3-4c9f-bf2a-ed8498606557" in namespace "downward-api-2568" to be "success or failure" Mar 8 23:55:43.113: INFO: Pod "downwardapi-volume-05ff5632-49d3-4c9f-bf2a-ed8498606557": 
Phase="Pending", Reason="", readiness=false. Elapsed: 3.996612ms Mar 8 23:55:45.118: INFO: Pod "downwardapi-volume-05ff5632-49d3-4c9f-bf2a-ed8498606557": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008960103s STEP: Saw pod success Mar 8 23:55:45.118: INFO: Pod "downwardapi-volume-05ff5632-49d3-4c9f-bf2a-ed8498606557" satisfied condition "success or failure" Mar 8 23:55:45.121: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-05ff5632-49d3-4c9f-bf2a-ed8498606557 container client-container: STEP: delete the pod Mar 8 23:55:45.149: INFO: Waiting for pod downwardapi-volume-05ff5632-49d3-4c9f-bf2a-ed8498606557 to disappear Mar 8 23:55:45.154: INFO: Pod downwardapi-volume-05ff5632-49d3-4c9f-bf2a-ed8498606557 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:55:45.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2568" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":280,"completed":89,"skipped":1531,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:55:45.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:75 Mar 8 23:55:45.228: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the sample API server. Mar 8 23:55:45.735: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 8 23:55:47.790: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308545, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308545, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308545, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308545, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 23:55:50.422: INFO: Waited 622.442315ms for the sample-apiserver to be ready to handle requests. 
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:66 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:55:50.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-8235" for this suite. • [SLOW TEST:5.805 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":280,"completed":90,"skipped":1539,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:55:50.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:56:04.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9574" for this suite. • [SLOW TEST:13.192 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":280,"completed":91,"skipped":1544,"failed":0} SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:56:04.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:56:06.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7692" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":280,"completed":92,"skipped":1547,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:56:06.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 23:56:06.939: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 23:56:08.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308566, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308566, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308567, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308566, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 23:56:11.990: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:56:12.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-553" for this suite. STEP: Destroying namespace "webhook-553-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:5.962 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":280,"completed":93,"skipped":1556,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:56:12.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-906.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-906.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-906.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-906.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-906.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.dns-test-service.dns-906.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-906.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-906.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-906.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-906.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-906.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-906.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-906.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 139.39.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.39.139_udp@PTR;check="$$(dig +tcp +noall +answer +search 139.39.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.39.139_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-906.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-906.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-906.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-906.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-906.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-906.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-906.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-906.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-906.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-906.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-906.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-906.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-906.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 139.39.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.39.139_udp@PTR;check="$$(dig +tcp +noall +answer +search 139.39.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.39.139_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 23:56:16.399: INFO: Unable to read wheezy_udp@dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:16.401: INFO: Unable to read wheezy_tcp@dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:16.404: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:16.406: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:16.422: INFO: Unable to read jessie_udp@dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:16.423: INFO: Unable to read jessie_tcp@dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:16.425: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:16.428: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:16.443: INFO: Lookups using dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9 failed for: [wheezy_udp@dns-test-service.dns-906.svc.cluster.local wheezy_tcp@dns-test-service.dns-906.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-906.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-906.svc.cluster.local jessie_udp@dns-test-service.dns-906.svc.cluster.local jessie_tcp@dns-test-service.dns-906.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-906.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-906.svc.cluster.local] Mar 8 23:56:21.448: INFO: Unable to read wheezy_udp@dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:21.451: INFO: Unable to read wheezy_tcp@dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:21.454: INFO: 
Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:21.457: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:21.480: INFO: Unable to read jessie_udp@dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:21.483: INFO: Unable to read jessie_tcp@dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:21.485: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:21.488: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:21.505: INFO: Lookups using dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9 failed for: [wheezy_udp@dns-test-service.dns-906.svc.cluster.local wheezy_tcp@dns-test-service.dns-906.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-906.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-906.svc.cluster.local jessie_udp@dns-test-service.dns-906.svc.cluster.local jessie_tcp@dns-test-service.dns-906.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-906.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-906.svc.cluster.local] Mar 8 23:56:26.447: INFO: Unable to read wheezy_udp@dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:26.450: INFO: Unable to read wheezy_tcp@dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:26.453: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:26.457: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:26.480: INFO: Unable to read jessie_udp@dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 
23:56:26.483: INFO: Unable to read jessie_tcp@dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:26.485: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:26.487: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:26.503: INFO: Lookups using dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9 failed for: [wheezy_udp@dns-test-service.dns-906.svc.cluster.local wheezy_tcp@dns-test-service.dns-906.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-906.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-906.svc.cluster.local jessie_udp@dns-test-service.dns-906.svc.cluster.local jessie_tcp@dns-test-service.dns-906.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-906.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-906.svc.cluster.local] Mar 8 23:56:31.448: INFO: Unable to read wheezy_udp@dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:31.452: INFO: Unable to read wheezy_tcp@dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:31.455: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:31.458: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:31.480: INFO: Unable to read jessie_udp@dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:31.484: INFO: Unable to read jessie_tcp@dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:31.487: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:31.489: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods 
dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:31.507: INFO: Lookups using dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9 failed for: [wheezy_udp@dns-test-service.dns-906.svc.cluster.local wheezy_tcp@dns-test-service.dns-906.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-906.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-906.svc.cluster.local jessie_udp@dns-test-service.dns-906.svc.cluster.local jessie_tcp@dns-test-service.dns-906.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-906.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-906.svc.cluster.local] Mar 8 23:56:36.447: INFO: Unable to read wheezy_udp@dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:36.450: INFO: Unable to read wheezy_tcp@dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:36.453: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:36.456: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:36.481: INFO: Unable to read jessie_udp@dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:36.484: INFO: Unable to read jessie_tcp@dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:36.486: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:36.490: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:36.514: INFO: Lookups using dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9 failed for: [wheezy_udp@dns-test-service.dns-906.svc.cluster.local wheezy_tcp@dns-test-service.dns-906.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-906.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-906.svc.cluster.local jessie_udp@dns-test-service.dns-906.svc.cluster.local jessie_tcp@dns-test-service.dns-906.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-906.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-906.svc.cluster.local] Mar 8 23:56:41.448: INFO: Unable to read wheezy_udp@dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could 
not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:41.451: INFO: Unable to read wheezy_tcp@dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:41.455: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:41.458: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:41.478: INFO: Unable to read jessie_udp@dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:41.481: INFO: Unable to read jessie_tcp@dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:41.483: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:41.486: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-906.svc.cluster.local from pod dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9: the server could not find the requested resource (get pods dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9) Mar 8 23:56:41.502: INFO: Lookups using dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9 failed for: [wheezy_udp@dns-test-service.dns-906.svc.cluster.local wheezy_tcp@dns-test-service.dns-906.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-906.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-906.svc.cluster.local jessie_udp@dns-test-service.dns-906.svc.cluster.local jessie_tcp@dns-test-service.dns-906.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-906.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-906.svc.cluster.local] Mar 8 23:56:46.523: INFO: DNS probes using dns-906/dns-test-a3e11550-7dce-4210-a2ef-91d4fd16c6f9 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:56:46.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-906" for this suite. 
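The wheezy/jessie probe loops above boil down to A, SRV, and PTR lookups against cluster DNS, retried until the records appear. A hand-run equivalent from any pod image that ships dig (the image choice here is illustrative):

  kubectl run -it --rm dns-check --restart=Never --image=tutum/dnsutils -- \
    sh -c 'dig +search +noall +answer dns-test-service.dns-906.svc.cluster.local A; \
           dig +search +noall +answer _http._tcp.dns-test-service.dns-906.svc.cluster.local SRV'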
• [SLOW TEST:34.482 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":280,"completed":94,"skipped":1569,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:56:46.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9539.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9539.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9539.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9539.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9539.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9539.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 23:56:50.872: INFO: DNS probes using dns-9539/dns-test-cd938e10-93ae-49cb-9778-1707068dbf1c succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:56:50.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9539" for this suite. 
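Unlike the previous case, this one checks entries the kubelet writes into /etc/hosts rather than DNS proper, which is why the probes use getent instead of dig. By hand it reduces to (pod and container names are placeholders for what the framework creates):

  kubectl exec -n dns-9539 <probe-pod> -c <querier-container> -- sh -c \
    'getent hosts dns-querier-1.dns-test-service.dns-9539.svc.cluster.local && getent hosts dns-querier-1'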
•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":280,"completed":95,"skipped":1586,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:56:50.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0308 23:57:01.126478 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 8 23:57:01.126: INFO:
For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:57:01.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5907" for this suite.
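The invariant this GC test checks: a dependent carrying two owner references survives deletion of one owner as long as the other owner still exists. The surviving references can be observed directly; the pod name below is a placeholder:

  # List a pod's owners. With both RCs as owners, deleting
  # simpletest-rc-to-be-deleted alone must leave the pod in place.
  kubectl get pod <pod-name> -n gc-5907 \
    -o jsonpath='{range .metadata.ownerReferences[*]}{.kind}/{.name}{"\n"}{end}'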
• [SLOW TEST:10.193 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":280,"completed":96,"skipped":1596,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:57:01.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88 Mar 8 23:57:01.215: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 8 23:57:01.225: INFO: Waiting for terminating namespaces to be deleted... Mar 8 23:57:01.228: INFO: Logging pods the kubelet thinks are on node latest-worker before test Mar 8 23:57:01.237: INFO: kube-proxy-9jc24 from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container status recorded) Mar 8 23:57:01.237: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 23:57:01.237: INFO: kindnet-2j5xm from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container status recorded) Mar 8 23:57:01.237: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 23:57:01.237: INFO: simpletest-rc-to-be-deleted-lkdq4 from gc-5907 started at 2020-03-08 23:56:51 +0000 UTC (1 container status recorded) Mar 8 23:57:01.237: INFO: Container nginx ready: true, restart count 0 Mar 8 23:57:01.237: INFO: simpletest-rc-to-be-deleted-59bss from gc-5907 started at 2020-03-08 23:56:51 +0000 UTC (1 container status recorded) Mar 8 23:57:01.237: INFO: Container nginx ready: true, restart count 0 Mar 8 23:57:01.237: INFO: simpletest-rc-to-be-deleted-8ttjv from gc-5907 started at 2020-03-08 23:56:51 +0000 UTC (1 container status recorded) Mar 8 23:57:01.237: INFO: Container nginx ready: true, restart count 0 Mar 8 23:57:01.237: INFO: simpletest-rc-to-be-deleted-4pr6w from gc-5907 started at 2020-03-08 23:56:51 +0000 UTC (1 container status recorded) Mar 8 23:57:01.237: INFO: Container nginx ready: true, restart count 0 Mar 8 23:57:01.237: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Mar 8 23:57:01.255: INFO: kindnet-spz5f from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container status recorded) Mar 8 23:57:01.255: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 23:57:01.255: INFO: coredns-6955765f44-cgshp from kube-system started at 2020-03-08 14:50:16 +0000 UTC (1 container status recorded) Mar 8 23:57:01.255: INFO: Container coredns ready: true, restart count 0
Mar 8 23:57:01.255: INFO: simpletest-rc-to-be-deleted-gddjz from gc-5907 started at 2020-03-08 23:56:51 +0000 UTC (1 container status recorded) Mar 8 23:57:01.255: INFO: Container nginx ready: true, restart count 0 Mar 8 23:57:01.255: INFO: kube-proxy-cx5xz from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container status recorded) Mar 8 23:57:01.255: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-178e6ba6-b56f-4269-8f98-1abb246f246f 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-178e6ba6-b56f-4269-8f98-1abb246f246f off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-178e6ba6-b56f-4269-8f98-1abb246f246f [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:57:11.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6617" for this suite.
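The three pods in this spec differ only in hostIP and protocol: the kubelet treats a host port claim as the (hostIP, hostPort, protocol) triple, so none of the three conflict and all of them fit on the same node. A minimal sketch of pod1 (image illustrative; the test pins pods via its random node label, simplified here to nodeName; pod2 and pod3 change only the commented fields):

    # pod1.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod1
    spec:
      nodeName: latest-worker2
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        ports:
        - containerPort: 8080
          hostPort: 54321
          hostIP: 127.0.0.1    # pod2: 127.0.0.2/TCP, pod3: 127.0.0.2/UDP
          protocol: TCP

    kubectl apply -f pod1.yaml
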
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 • [SLOW TEST:10.337 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":280,"completed":97,"skipped":1609,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:57:11.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap configmap-8417/configmap-test-968a225f-6b34-4a2f-ac86-0ba3ffc7341d STEP: Creating a pod to test consume configMaps Mar 8 23:57:11.537: INFO: Waiting up to 5m0s for pod "pod-configmaps-7e2eba5d-416c-4755-baa6-02ed9892e298" in namespace "configmap-8417" to be "success or failure" Mar 8 23:57:11.540: INFO: Pod "pod-configmaps-7e2eba5d-416c-4755-baa6-02ed9892e298": Phase="Pending", Reason="", readiness=false. Elapsed: 3.673728ms Mar 8 23:57:13.637: INFO: Pod "pod-configmaps-7e2eba5d-416c-4755-baa6-02ed9892e298": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099920027s Mar 8 23:57:15.640: INFO: Pod "pod-configmaps-7e2eba5d-416c-4755-baa6-02ed9892e298": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.103807482s STEP: Saw pod success Mar 8 23:57:15.641: INFO: Pod "pod-configmaps-7e2eba5d-416c-4755-baa6-02ed9892e298" satisfied condition "success or failure" Mar 8 23:57:15.644: INFO: Trying to get logs from node latest-worker pod pod-configmaps-7e2eba5d-416c-4755-baa6-02ed9892e298 container env-test: STEP: delete the pod Mar 8 23:57:15.764: INFO: Waiting for pod pod-configmaps-7e2eba5d-416c-4755-baa6-02ed9892e298 to disappear Mar 8 23:57:15.774: INFO: Pod pod-configmaps-7e2eba5d-416c-4755-baa6-02ed9892e298 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:57:15.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8417" for this suite. 
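In manifest form, the "consume configMaps" pod above maps one ConfigMap key into the container's environment, prints env, and exits; the test then inspects the container log. A minimal sketch (names and key illustrative):

    # configmap-pod.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmaps
    spec:
      restartPolicy: Never
      containers:
      - name: env-test
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "env"]
        env:
        - name: CONFIG_DATA_1
          valueFrom:
            configMapKeyRef:
              name: configmap-test
              key: data-1

    kubectl create configmap configmap-test --from-literal=data-1=value-1
    kubectl apply -f configmap-pod.yaml
    kubectl logs pod-configmaps | grep CONFIG_DATA_1   # CONFIG_DATA_1=value-1 once completed
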
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":280,"completed":98,"skipped":1645,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:57:15.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 23:57:15.842: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 8 23:57:18.655: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6167 create -f -' Mar 8 23:57:20.743: INFO: stderr: "" Mar 8 23:57:20.743: INFO: stdout: "e2e-test-crd-publish-openapi-6105-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 8 23:57:20.743: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6167 delete e2e-test-crd-publish-openapi-6105-crds test-cr' Mar 8 23:57:20.869: INFO: stderr: "" Mar 8 23:57:20.869: INFO: stdout: "e2e-test-crd-publish-openapi-6105-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Mar 8 23:57:20.869: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6167 apply -f -' Mar 8 23:57:21.131: INFO: stderr: "" Mar 8 23:57:21.131: INFO: stdout: "e2e-test-crd-publish-openapi-6105-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 8 23:57:21.131: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6167 delete e2e-test-crd-publish-openapi-6105-crds test-cr' Mar 8 23:57:21.258: INFO: stderr: "" Mar 8 23:57:21.258: INFO: stdout: "e2e-test-crd-publish-openapi-6105-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Mar 8 23:57:21.258: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6105-crds' Mar 8 23:57:21.497: INFO: stderr: "" Mar 8 23:57:21.497: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6105-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:57:24.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6167" for this suite. 
• [SLOW TEST:8.652 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":280,"completed":99,"skipped":1662,"failed":0} SSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:57:24.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 8 23:57:28.505: INFO: &Pod{ObjectMeta:{send-events-4b219855-a8a1-4498-a20f-3e970bdf6ea7 events-6429 /api/v1/namespaces/events-6429/pods/send-events-4b219855-a8a1-4498-a20f-3e970bdf6ea7 aad643b6-f46b-426f-a099-2f1be9e9c180 136921 0 2020-03-08 23:57:24 +0000 UTC map[name:foo time:484171822] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nqncn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nqncn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nqncn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:57:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:57:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:57:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:57:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.73,StartTime:2020-03-08 23:57:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 23:57:25 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://c35ac754ae2d079481823800353d9956a16732c03f0ed91e7542c98725b8e686,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.73,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Mar 8 23:57:30.510: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 8 23:57:32.514: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:57:32.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6429" for this suite. • [SLOW TEST:8.108 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":280,"completed":100,"skipped":1668,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:57:32.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 23:57:32.585: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7037' Mar 8 23:57:32.929: INFO: stderr: "" Mar 8 23:57:32.929: INFO: stdout: "replicationcontroller/agnhost-master created\n" Mar 8 23:57:32.929: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7037' Mar 8 23:57:33.216: INFO: stderr: "" Mar 8 23:57:33.216: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 8 23:57:34.219: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 23:57:34.220: INFO: Found 0 / 1 Mar 8 23:57:35.220: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 23:57:35.220: INFO: Found 1 / 1 Mar 8 23:57:35.220: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 Mar 8 23:57:35.223: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 23:57:35.223: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 8 23:57:35.223: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe pod agnhost-master-rccbp --namespace=kubectl-7037' Mar 8 23:57:35.343: INFO: stderr: "" Mar 8 23:57:35.343: INFO: stdout: "Name: agnhost-master-rccbp\nNamespace: kubectl-7037\nPriority: 0\nNode: latest-worker/172.17.0.16\nStart Time: Sun, 08 Mar 2020 23:57:32 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.74\nIPs:\n IP: 10.244.1.74\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://491028888e7cc86a42757be5ce0bde7608faf184ef80d5c3e7f5ee501201a3a3\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sun, 08 Mar 2020 23:57:33 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-ndrdc (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-ndrdc:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-ndrdc\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-7037/agnhost-master-rccbp to latest-worker\n Normal Pulled 2s kubelet, latest-worker Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 2s kubelet, latest-worker Created container agnhost-master\n Normal Started 1s kubelet, latest-worker Started container agnhost-master\n" Mar 8 23:57:35.343: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-7037' Mar 8 23:57:35.458: INFO: stderr: "" Mar 8 23:57:35.458: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7037\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-master-rccbp\n" Mar 8 23:57:35.458: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-7037' Mar 8 23:57:35.553: INFO: stderr: "" Mar 8 23:57:35.553: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7037\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.43.157\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.74:6379\nSession Affinity: None\nEvents: \n" Mar 8 
23:57:35.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe node latest-control-plane' Mar 8 23:57:35.674: INFO: stderr: "" Mar 8 23:57:35.674: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 08 Mar 2020 14:49:22 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Sun, 08 Mar 2020 23:57:31 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sun, 08 Mar 2020 23:55:32 +0000 Sun, 08 Mar 2020 14:49:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sun, 08 Mar 2020 23:55:32 +0000 Sun, 08 Mar 2020 14:49:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sun, 08 Mar 2020 23:55:32 +0000 Sun, 08 Mar 2020 14:49:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sun, 08 Mar 2020 23:55:32 +0000 Sun, 08 Mar 2020 14:50:16 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.17\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nSystem Info:\n Machine ID: fb03af8223ea4430b6faaad8b31da5e5\n System UUID: 220fc748-c3b9-4de4-aa76-4a3520169f00\n Boot ID: 3de0b5b8-8b8f-48d3-9705-cabccc881bdb\n Kernel Version: 4.4.0-142-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (8 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-gxrvh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 9h\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9h\n kube-system kindnet-gp8bt 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 9h\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 9h\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 9h\n kube-system kube-proxy-nxxmk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9h\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 9h\n local-path-storage local-path-provisioner-7745554f7f-52xw4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 750m (4%) 100m (0%)\n memory 120Mi (0%) 220Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Mar 8 23:57:35.675: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe namespace 
kubectl-7037' Mar 8 23:57:35.759: INFO: stderr: "" Mar 8 23:57:35.759: INFO: stdout: "Name: kubectl-7037\nLabels: e2e-framework=kubectl\n e2e-run=eea2ee52-965b-4dce-bea2-244956469237\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:57:35.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7037" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":280,"completed":101,"skipped":1694,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:57:35.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 23:57:36.730: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 23:57:38.741: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308656, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308656, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308656, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308656, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 23:57:41.792: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 23:57:41.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2610-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as 
storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:57:43.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-757" for this suite. STEP: Destroying namespace "webhook-757-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:7.310 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":280,"completed":102,"skipped":1720,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:57:43.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 8 23:57:43.179: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3284 /api/v1/namespaces/watch-3284/configmaps/e2e-watch-test-watch-closed 2fb69823-c78a-4b87-b44d-648f32d9ae4d 137090 0 2020-03-08 23:57:43 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 23:57:43.179: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3284 /api/v1/namespaces/watch-3284/configmaps/e2e-watch-test-watch-closed 2fb69823-c78a-4b87-b44d-648f32d9ae4d 137091 0 2020-03-08 23:57:43 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 8 23:57:43.208: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3284 /api/v1/namespaces/watch-3284/configmaps/e2e-watch-test-watch-closed 2fb69823-c78a-4b87-b44d-648f32d9ae4d 137092 0 
2020-03-08 23:57:43 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 23:57:43.208: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3284 /api/v1/namespaces/watch-3284/configmaps/e2e-watch-test-watch-closed 2fb69823-c78a-4b87-b44d-648f32d9ae4d 137093 0 2020-03-08 23:57:43 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:57:43.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3284" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":280,"completed":103,"skipped":1730,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:57:43.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 8 23:57:43.278: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4c8368d0-5d35-426f-9a36-ca2b3c4c236c" in namespace "projected-5334" to be "success or failure" Mar 8 23:57:43.314: INFO: Pod "downwardapi-volume-4c8368d0-5d35-426f-9a36-ca2b3c4c236c": Phase="Pending", Reason="", readiness=false. Elapsed: 36.072323ms Mar 8 23:57:45.319: INFO: Pod "downwardapi-volume-4c8368d0-5d35-426f-9a36-ca2b3c4c236c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.040209528s STEP: Saw pod success Mar 8 23:57:45.319: INFO: Pod "downwardapi-volume-4c8368d0-5d35-426f-9a36-ca2b3c4c236c" satisfied condition "success or failure" Mar 8 23:57:45.321: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-4c8368d0-5d35-426f-9a36-ca2b3c4c236c container client-container: STEP: delete the pod Mar 8 23:57:45.339: INFO: Waiting for pod downwardapi-volume-4c8368d0-5d35-426f-9a36-ca2b3c4c236c to disappear Mar 8 23:57:45.343: INFO: Pod downwardapi-volume-4c8368d0-5d35-426f-9a36-ca2b3c4c236c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:57:45.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5334" for this suite. 
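The "podname only" spec above mounts a projected downwardAPI volume that exposes metadata.name as a file, then prints it. A minimal sketch (names illustrative):

    # podname-pod.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-test
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat /etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: podname
                fieldRef:
                  fieldPath: metadata.name

    kubectl apply -f podname-pod.yaml
    kubectl logs downwardapi-volume-test   # prints: downwardapi-volume-test
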
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":280,"completed":104,"skipped":1736,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:57:45.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-d68a729a-1c34-40a5-b942-67709fbe226b STEP: Creating a pod to test consume secrets Mar 8 23:57:45.454: INFO: Waiting up to 5m0s for pod "pod-secrets-cc02ec36-7d39-4988-8dbc-1bab859213df" in namespace "secrets-5709" to be "success or failure" Mar 8 23:57:45.469: INFO: Pod "pod-secrets-cc02ec36-7d39-4988-8dbc-1bab859213df": Phase="Pending", Reason="", readiness=false. Elapsed: 15.870505ms Mar 8 23:57:47.473: INFO: Pod "pod-secrets-cc02ec36-7d39-4988-8dbc-1bab859213df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019832232s Mar 8 23:57:49.477: INFO: Pod "pod-secrets-cc02ec36-7d39-4988-8dbc-1bab859213df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023815398s STEP: Saw pod success Mar 8 23:57:49.477: INFO: Pod "pod-secrets-cc02ec36-7d39-4988-8dbc-1bab859213df" satisfied condition "success or failure" Mar 8 23:57:49.480: INFO: Trying to get logs from node latest-worker pod pod-secrets-cc02ec36-7d39-4988-8dbc-1bab859213df container secret-volume-test: STEP: delete the pod Mar 8 23:57:49.495: INFO: Waiting for pod pod-secrets-cc02ec36-7d39-4988-8dbc-1bab859213df to disappear Mar 8 23:57:49.499: INFO: Pod pod-secrets-cc02ec36-7d39-4988-8dbc-1bab859213df no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:57:49.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5709" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":105,"skipped":1751,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:57:49.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name projected-secret-test-3af432c5-f30b-492b-b146-461b321f6ade STEP: Creating a pod to test consume secrets Mar 8 23:57:49.631: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7e1e513c-c966-41f6-a01d-c7fc38c78650" in namespace "projected-2619" to be "success or failure" Mar 8 23:57:49.637: INFO: Pod "pod-projected-secrets-7e1e513c-c966-41f6-a01d-c7fc38c78650": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041733ms Mar 8 23:57:51.641: INFO: Pod "pod-projected-secrets-7e1e513c-c966-41f6-a01d-c7fc38c78650": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010109737s Mar 8 23:57:53.645: INFO: Pod "pod-projected-secrets-7e1e513c-c966-41f6-a01d-c7fc38c78650": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014026037s STEP: Saw pod success Mar 8 23:57:53.645: INFO: Pod "pod-projected-secrets-7e1e513c-c966-41f6-a01d-c7fc38c78650" satisfied condition "success or failure" Mar 8 23:57:53.648: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-7e1e513c-c966-41f6-a01d-c7fc38c78650 container projected-secret-volume-test: STEP: delete the pod Mar 8 23:57:53.664: INFO: Waiting for pod pod-projected-secrets-7e1e513c-c966-41f6-a01d-c7fc38c78650 to disappear Mar 8 23:57:53.669: INFO: Pod pod-projected-secrets-7e1e513c-c966-41f6-a01d-c7fc38c78650 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:57:53.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2619" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":106,"skipped":1753,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:57:53.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 8 23:57:53.795: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9557 /api/v1/namespaces/watch-9557/configmaps/e2e-watch-test-resource-version b7ad477c-19bd-4992-84f6-8d8b6efc5480 137207 0 2020-03-08 23:57:53 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 23:57:53.795: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9557 /api/v1/namespaces/watch-9557/configmaps/e2e-watch-test-resource-version b7ad477c-19bd-4992-84f6-8d8b6efc5480 137208 0 2020-03-08 23:57:53 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:57:53.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9557" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":280,"completed":107,"skipped":1767,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:57:53.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 23:57:53.858: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:57:55.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5177" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":280,"completed":108,"skipped":1783,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:57:55.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:58:07.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7134" for this suite. • [SLOW TEST:11.138 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":280,"completed":109,"skipped":1800,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:58:07.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:58:14.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7930" for this suite. • [SLOW TEST:7.104 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":280,"completed":110,"skipped":1845,"failed":0} SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:58:14.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88 Mar 8 23:58:14.281: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 8 23:58:14.320: INFO: Waiting for terminating namespaces to be deleted... 
Mar 8 23:58:14.322: INFO: Logging pods the kubelet thinks are on node latest-worker before test Mar 8 23:58:14.327: INFO: pod-exec-websocket-7b8fb440-fbf5-427e-8d0a-fb69af5fb904 from pods-5177 started at 2020-03-08 23:57:53 +0000 UTC (1 container status recorded) Mar 8 23:58:14.327: INFO: Container main ready: true, restart count 0 Mar 8 23:58:14.327: INFO: kube-proxy-9jc24 from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container status recorded) Mar 8 23:58:14.327: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 23:58:14.327: INFO: kindnet-2j5xm from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container status recorded) Mar 8 23:58:14.327: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 23:58:14.327: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Mar 8 23:58:14.333: INFO: kube-proxy-cx5xz from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container status recorded) Mar 8 23:58:14.333: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 23:58:14.333: INFO: kindnet-spz5f from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container status recorded) Mar 8 23:58:14.333: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 23:58:14.333: INFO: coredns-6955765f44-cgshp from kube-system started at 2020-03-08 14:50:16 +0000 UTC (1 container status recorded) Mar 8 23:58:14.333: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-8f464ea9-91ef-4449-aafe-308412dae882 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-8f464ea9-91ef-4449-aafe-308412dae882 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-8f464ea9-91ef-4449-aafe-308412dae882 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:58:20.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-978" for this suite.
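The sequence above is the standard label-and-select handshake: tag one node with a unique label, relaunch the pod with a matching nodeSelector, and verify it lands on that node. A minimal sketch with an illustrative label key:

    kubectl label node latest-worker kubernetes.io/e2e-example=42

    # with-labels.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: with-labels
    spec:
      nodeSelector:
        kubernetes.io/e2e-example: "42"
      containers:
      - name: with-labels
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8

    kubectl apply -f with-labels.yaml
    kubectl get pod with-labels -o wide                          # NODE: latest-worker
    kubectl label node latest-worker kubernetes.io/e2e-example-  # remove the label again
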
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 • [SLOW TEST:6.252 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":280,"completed":111,"skipped":1848,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:58:20.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test env composition Mar 8 23:58:20.549: INFO: Waiting up to 5m0s for pod "var-expansion-2ebb983f-f3e6-4c20-9cb1-ebff5c8aa31b" in namespace "var-expansion-1701" to be "success or failure" Mar 8 23:58:20.591: INFO: Pod "var-expansion-2ebb983f-f3e6-4c20-9cb1-ebff5c8aa31b": Phase="Pending", Reason="", readiness=false. Elapsed: 41.327292ms Mar 8 23:58:22.595: INFO: Pod "var-expansion-2ebb983f-f3e6-4c20-9cb1-ebff5c8aa31b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.045273238s STEP: Saw pod success Mar 8 23:58:22.595: INFO: Pod "var-expansion-2ebb983f-f3e6-4c20-9cb1-ebff5c8aa31b" satisfied condition "success or failure" Mar 8 23:58:22.598: INFO: Trying to get logs from node latest-worker2 pod var-expansion-2ebb983f-f3e6-4c20-9cb1-ebff5c8aa31b container dapi-container: STEP: delete the pod Mar 8 23:58:22.657: INFO: Waiting for pod var-expansion-2ebb983f-f3e6-4c20-9cb1-ebff5c8aa31b to disappear Mar 8 23:58:22.681: INFO: Pod var-expansion-2ebb983f-f3e6-4c20-9cb1-ebff5c8aa31b no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:58:22.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1701" for this suite. 
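Env composition above relies on the kubelet expanding $(VAR) references against variables defined earlier in the same env list, before the container starts. A minimal sketch (names and values illustrative):

    # var-expansion.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: var-expansion
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "env"]
        env:
        - name: FOO
          value: foo-value
        - name: BAR
          value: bar-value
        - name: FOOBAR
          value: "$(FOO);;$(BAR)"   # expanded by the kubelet, not by a shell

    kubectl apply -f var-expansion.yaml
    kubectl logs var-expansion | grep FOOBAR   # FOOBAR=foo-value;;bar-value
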
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":280,"completed":112,"skipped":1897,"failed":0} SSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:58:22.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Mar 8 23:58:22.786: INFO: Waiting up to 5m0s for pod "downward-api-b2070dcb-d94c-446d-b93f-2dd610c6acb5" in namespace "downward-api-7046" to be "success or failure" Mar 8 23:58:22.789: INFO: Pod "downward-api-b2070dcb-d94c-446d-b93f-2dd610c6acb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.892363ms Mar 8 23:58:24.810: INFO: Pod "downward-api-b2070dcb-d94c-446d-b93f-2dd610c6acb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024251549s Mar 8 23:58:26.813: INFO: Pod "downward-api-b2070dcb-d94c-446d-b93f-2dd610c6acb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02688455s STEP: Saw pod success Mar 8 23:58:26.813: INFO: Pod "downward-api-b2070dcb-d94c-446d-b93f-2dd610c6acb5" satisfied condition "success or failure" Mar 8 23:58:26.815: INFO: Trying to get logs from node latest-worker2 pod downward-api-b2070dcb-d94c-446d-b93f-2dd610c6acb5 container dapi-container: STEP: delete the pod Mar 8 23:58:26.830: INFO: Waiting for pod downward-api-b2070dcb-d94c-446d-b93f-2dd610c6acb5 to disappear Mar 8 23:58:26.835: INFO: Pod downward-api-b2070dcb-d94c-446d-b93f-2dd610c6acb5 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:58:26.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7046" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":280,"completed":113,"skipped":1905,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:58:26.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating the pod Mar 8 23:58:31.449: INFO: Successfully updated pod "annotationupdate41e72ab6-00a4-40bf-864b-658ff8ad9af9" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:58:33.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1737" for this suite. • [SLOW TEST:6.648 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":280,"completed":114,"skipped":1944,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:58:33.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 23:58:33.993: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 23:58:36.003: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308713, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308713, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308714, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308713, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 23:58:39.028: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:58:39.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6918" for this suite. STEP: Destroying namespace "webhook-6918-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:5.707 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":280,"completed":115,"skipped":1952,"failed":0} [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:58:39.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating service multi-endpoint-test in namespace services-2911 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2911 to expose endpoints map[] Mar 8 23:58:39.349: INFO: successfully validated that service multi-endpoint-test in namespace services-2911 exposes endpoints map[] (41.769328ms elapsed) STEP: Creating pod pod1 in namespace services-2911 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2911 to 
expose endpoints map[pod1:[100]] Mar 8 23:58:41.381: INFO: successfully validated that service multi-endpoint-test in namespace services-2911 exposes endpoints map[pod1:[100]] (2.026966191s elapsed) STEP: Creating pod pod2 in namespace services-2911 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2911 to expose endpoints map[pod1:[100] pod2:[101]] Mar 8 23:58:43.427: INFO: successfully validated that service multi-endpoint-test in namespace services-2911 exposes endpoints map[pod1:[100] pod2:[101]] (2.0422714s elapsed) STEP: Deleting pod pod1 in namespace services-2911 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2911 to expose endpoints map[pod2:[101]] Mar 8 23:58:44.458: INFO: successfully validated that service multi-endpoint-test in namespace services-2911 exposes endpoints map[pod2:[101]] (1.02624823s elapsed) STEP: Deleting pod pod2 in namespace services-2911 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2911 to expose endpoints map[] Mar 8 23:58:45.488: INFO: successfully validated that service multi-endpoint-test in namespace services-2911 exposes endpoints map[] (1.024815605s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:58:45.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2911" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:6.356 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":280,"completed":116,"skipped":1952,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:58:45.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-c92be1a7-2e93-4b9c-90b9-178cebed5001 STEP: Creating a pod to test consume configMaps Mar 8 23:58:45.607: INFO: Waiting up to 5m0s for pod "pod-configmaps-7403e014-65bc-47a4-ae2b-8cf98cf3156e" in namespace "configmap-477" to be "success or failure" Mar 8 23:58:45.611: INFO: Pod "pod-configmaps-7403e014-65bc-47a4-ae2b-8cf98cf3156e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204123ms Mar 8 23:58:47.615: INFO: Pod "pod-configmaps-7403e014-65bc-47a4-ae2b-8cf98cf3156e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008452532s Mar 8 23:58:49.619: INFO: Pod "pod-configmaps-7403e014-65bc-47a4-ae2b-8cf98cf3156e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012412614s STEP: Saw pod success Mar 8 23:58:49.619: INFO: Pod "pod-configmaps-7403e014-65bc-47a4-ae2b-8cf98cf3156e" satisfied condition "success or failure" Mar 8 23:58:49.622: INFO: Trying to get logs from node latest-worker pod pod-configmaps-7403e014-65bc-47a4-ae2b-8cf98cf3156e container configmap-volume-test: STEP: delete the pod Mar 8 23:58:49.655: INFO: Waiting for pod pod-configmaps-7403e014-65bc-47a4-ae2b-8cf98cf3156e to disappear Mar 8 23:58:49.673: INFO: Pod pod-configmaps-7403e014-65bc-47a4-ae2b-8cf98cf3156e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:58:49.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-477" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":280,"completed":117,"skipped":1958,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:58:49.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 23:58:50.457: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 23:58:52.464: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308730, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308730, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308730, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308730, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 23:58:55.499: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one, which should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one, which should be rejected by the webhook STEP: create a namespace that bypasses the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:59:06.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7971" for this suite. STEP: Destroying namespace "webhook-7971-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:16.425 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":280,"completed":118,"skipped":1971,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:59:06.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-eb6a6c05-a63d-431b-9920-2e2b1974457d STEP: Creating a pod to test consume secrets Mar 8 23:59:06.313: INFO: Waiting up to 5m0s for pod "pod-secrets-b4dc4cd0-b179-403b-82d4-b2f5971f7643" in namespace "secrets-906" to be "success or failure" Mar 8 23:59:06.318: INFO: Pod "pod-secrets-b4dc4cd0-b179-403b-82d4-b2f5971f7643": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33129ms Mar 8 23:59:08.321: INFO: Pod "pod-secrets-b4dc4cd0-b179-403b-82d4-b2f5971f7643": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 2.008022987s STEP: Saw pod success Mar 8 23:59:08.321: INFO: Pod "pod-secrets-b4dc4cd0-b179-403b-82d4-b2f5971f7643" satisfied condition "success or failure" Mar 8 23:59:08.324: INFO: Trying to get logs from node latest-worker pod pod-secrets-b4dc4cd0-b179-403b-82d4-b2f5971f7643 container secret-volume-test: STEP: delete the pod Mar 8 23:59:08.343: INFO: Waiting for pod pod-secrets-b4dc4cd0-b179-403b-82d4-b2f5971f7643 to disappear Mar 8 23:59:08.348: INFO: Pod pod-secrets-b4dc4cd0-b179-403b-82d4-b2f5971f7643 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:59:08.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-906" for this suite. STEP: Destroying namespace "secret-namespace-8428" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":280,"completed":119,"skipped":1980,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:59:08.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:59:08.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5846" for this suite. STEP: Destroying namespace "nspatchtest-8d078c1d-b25b-46cb-b98e-e8e499a7a1e6-9817" for this suite. 
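The namespace patch above is a plain strategic-merge patch against the Namespace object, followed by a read-back to confirm the label stuck. A minimal sketch with hypothetical namespace and label names:

$ kubectl create namespace nspatchtest-demo
$ kubectl patch namespace nspatchtest-demo -p '{"metadata":{"labels":{"testLabel":"testValue"}}}'
$ kubectl get namespace nspatchtest-demo --show-labels   # expect testLabel=testValue in LABELS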
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":280,"completed":120,"skipped":2015,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:59:08.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 23:59:08.581: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 8 23:59:13.584: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 8 23:59:13.584: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 8 23:59:15.588: INFO: Creating deployment "test-rollover-deployment" Mar 8 23:59:15.598: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 8 23:59:17.604: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 8 23:59:17.611: INFO: Ensure that both replica sets have 1 created replica Mar 8 23:59:17.617: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 8 23:59:17.624: INFO: Updating deployment test-rollover-deployment Mar 8 23:59:17.624: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 8 23:59:19.632: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 8 23:59:19.637: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 8 23:59:19.643: INFO: all replica sets need to contain the pod-template-hash label Mar 8 23:59:19.643: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308755, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308755, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308757, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308755, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 23:59:21.650: INFO: all replica sets need to contain the pod-template-hash label Mar 8 23:59:21.650: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308755, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308755, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308759, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308755, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 23:59:23.650: INFO: all replica sets need to contain the pod-template-hash label Mar 8 23:59:23.650: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308755, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308755, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308759, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308755, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 23:59:25.651: INFO: all replica sets need to contain the pod-template-hash label Mar 8 23:59:25.651: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308755, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308755, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308759, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308755, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 23:59:27.651: INFO: all replica sets need to contain the pod-template-hash label Mar 8 23:59:27.651: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308755, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308755, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308759, 
loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308755, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 23:59:29.651: INFO: all replica sets need to contain the pod-template-hash label Mar 8 23:59:29.651: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308755, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308755, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308759, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308755, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 23:59:31.651: INFO: Mar 8 23:59:31.651: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 8 23:59:31.660: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-3298 /apis/apps/v1/namespaces/deployment-3298/deployments/test-rollover-deployment 9e2e86ca-0be6-4b06-8c1f-b73f34270cd8 138048 2 2020-03-08 23:59:15 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002b46748 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-08 23:59:15 +0000 UTC,LastTransitionTime:2020-03-08 23:59:15 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully 
progressed.,LastUpdateTime:2020-03-08 23:59:29 +0000 UTC,LastTransitionTime:2020-03-08 23:59:15 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 8 23:59:31.663: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-3298 /apis/apps/v1/namespaces/deployment-3298/replicasets/test-rollover-deployment-574d6dfbff d6fbb132-ce04-4185-9d17-da92a5a6c26d 138037 2 2020-03-08 23:59:17 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 9e2e86ca-0be6-4b06-8c1f-b73f34270cd8 0xc002b46ba7 0xc002b46ba8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002b46c18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 8 23:59:31.663: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 8 23:59:31.664: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-3298 /apis/apps/v1/namespaces/deployment-3298/replicasets/test-rollover-controller 4eb42e1a-807d-4c67-b92f-1ac4beb618e5 138046 2 2020-03-08 23:59:08 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 9e2e86ca-0be6-4b06-8c1f-b73f34270cd8 0xc002b46ad7 0xc002b46ad8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002b46b38 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 8 23:59:31.664: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-3298 
/apis/apps/v1/namespaces/deployment-3298/replicasets/test-rollover-deployment-f6c94f66c 036b179a-276e-494c-a54b-b7df369d2c86 137990 2 2020-03-08 23:59:15 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 9e2e86ca-0be6-4b06-8c1f-b73f34270cd8 0xc002b46c80 0xc002b46c81}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002b46cf8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 8 23:59:31.667: INFO: Pod "test-rollover-deployment-574d6dfbff-lv5r5" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-lv5r5 test-rollover-deployment-574d6dfbff- deployment-3298 /api/v1/namespaces/deployment-3298/pods/test-rollover-deployment-574d6dfbff-lv5r5 93a68ef2-0340-4c07-8730-bd0ba81f1632 138004 0 2020-03-08 23:59:17 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff d6fbb132-ce04-4185-9d17-da92a5a6c26d 0xc0032c95d7 0xc0032c95d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jdhxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jdhxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jdhxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:59:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:59:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:59:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 23:59:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.88,StartTime:2020-03-08 23:59:17 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 23:59:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://951879abb9de013e9a23c4a7aa5ccda86efd97bb8961bfda7d09ef21972290b6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.88,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:59:31.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3298" for this suite. • [SLOW TEST:23.176 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":280,"completed":121,"skipped":2036,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:59:31.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 23:59:32.912: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 23:59:34.922: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308772, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308772, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308772, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719308772, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is 
progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 23:59:37.951: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:59:37.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4524" for this suite. STEP: Destroying namespace "webhook-4524-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:6.358 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":280,"completed":122,"skipped":2040,"failed":0} SSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:59:38.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:59:41.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7964" for this suite. 
•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":280,"completed":123,"skipped":2043,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:59:41.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1598 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 8 23:59:41.367: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6065' Mar 8 23:59:41.491: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 8 23:59:41.491: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1604 Mar 8 23:59:43.501: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-6065' Mar 8 23:59:43.640: INFO: stderr: "" Mar 8 23:59:43.640: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:59:43.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6065" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":280,"completed":124,"skipped":2044,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:59:43.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:59:43.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7560" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":280,"completed":125,"skipped":2048,"failed":0} SSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:59:43.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 23:59:43.823: INFO: (0) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/ pods/ (200; 4.623804ms)
Mar 8 23:59:43.826: INFO: (1) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.718326ms)
Mar 8 23:59:43.828: INFO: (2) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.664239ms)
Mar 8 23:59:43.832: INFO: (3) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.366214ms)
Mar 8 23:59:43.834: INFO: (4) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.701838ms)
Mar 8 23:59:43.837: INFO: (5) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.726828ms)
Mar 8 23:59:43.840: INFO: (6) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.603359ms)
Mar 8 23:59:43.843: INFO: (7) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.863891ms)
Mar 8 23:59:43.846: INFO: (8) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.24768ms)
Mar 8 23:59:43.849: INFO: (9) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.658622ms)
Mar 8 23:59:43.851: INFO: (10) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.688612ms)
Mar 8 23:59:43.854: INFO: (11) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.520974ms)
Mar 8 23:59:43.857: INFO: (12) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.640713ms)
Mar 8 23:59:43.859: INFO: (13) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.672498ms)
Mar 8 23:59:43.862: INFO: (14) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.556575ms)
Mar 8 23:59:43.864: INFO: (15) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.522087ms)
Mar 8 23:59:43.867: INFO: (16) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.471128ms)
Mar 8 23:59:43.870: INFO: (17) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.566765ms)
Mar 8 23:59:43.872: INFO: (18) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.342832ms)
Mar 8 23:59:43.875: INFO: (19) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/
(200; 2.574829ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:59:43.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-304" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":280,"completed":126,"skipped":2051,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:59:43.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 8 23:59:43.944: INFO: Waiting up to 5m0s for pod "downwardapi-volume-17c3030d-a330-4dbb-a7d3-bd5696443560" in namespace "downward-api-4569" to be "success or failure" Mar 8 23:59:43.968: INFO: Pod "downwardapi-volume-17c3030d-a330-4dbb-a7d3-bd5696443560": Phase="Pending", Reason="", readiness=false. Elapsed: 23.266909ms Mar 8 23:59:45.971: INFO: Pod "downwardapi-volume-17c3030d-a330-4dbb-a7d3-bd5696443560": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026864334s Mar 8 23:59:47.976: INFO: Pod "downwardapi-volume-17c3030d-a330-4dbb-a7d3-bd5696443560": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031486708s STEP: Saw pod success Mar 8 23:59:47.976: INFO: Pod "downwardapi-volume-17c3030d-a330-4dbb-a7d3-bd5696443560" satisfied condition "success or failure" Mar 8 23:59:47.979: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-17c3030d-a330-4dbb-a7d3-bd5696443560 container client-container: STEP: delete the pod Mar 8 23:59:48.007: INFO: Waiting for pod downwardapi-volume-17c3030d-a330-4dbb-a7d3-bd5696443560 to disappear Mar 8 23:59:48.017: INFO: Pod downwardapi-volume-17c3030d-a330-4dbb-a7d3-bd5696443560 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:59:48.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4569" for this suite. 
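The downward API volume variant differs from the env-var form in that each volume item's resourceFieldRef must name the container it refers to. A minimal sketch matching the container name in the log, with a hypothetical pod name and request value:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m   # hypothetical request; the volume item below reports it
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container   # required for volume items
          resource: requests.cpu
EOF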
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":280,"completed":127,"skipped":2081,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:59:48.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 8 23:59:48.076: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:59:48.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4970" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":280,"completed":128,"skipped":2085,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:59:48.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name cm-test-opt-del-1c08ab35-e9bf-44ee-b2b2-1fc763e4f37b STEP: Creating configMap with name cm-test-opt-upd-b19a2803-a9e8-4e9d-a409-98e58e01f9be STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-1c08ab35-e9bf-44ee-b2b2-1fc763e4f37b STEP: Updating configmap cm-test-opt-upd-b19a2803-a9e8-4e9d-a409-98e58e01f9be STEP: Creating configMap with name cm-test-opt-create-ff7f7b72-a847-4211-98d9-542e0da019ae STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 8 23:59:56.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5257" for this suite. 
• [SLOW TEST:8.220 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":129,"skipped":2114,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 8 23:59:56.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 8 23:59:57.051: INFO: Waiting up to 5m0s for pod "pod-93b2ea4a-8070-441e-b1e1-a6a6eacf992c" in namespace "emptydir-9835" to be "success or failure" Mar 8 23:59:57.072: INFO: Pod "pod-93b2ea4a-8070-441e-b1e1-a6a6eacf992c": Phase="Pending", Reason="", readiness=false. Elapsed: 21.427628ms Mar 8 23:59:59.076: INFO: Pod "pod-93b2ea4a-8070-441e-b1e1-a6a6eacf992c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025372266s Mar 9 00:00:01.080: INFO: Pod "pod-93b2ea4a-8070-441e-b1e1-a6a6eacf992c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029578621s STEP: Saw pod success Mar 9 00:00:01.080: INFO: Pod "pod-93b2ea4a-8070-441e-b1e1-a6a6eacf992c" satisfied condition "success or failure" Mar 9 00:00:01.084: INFO: Trying to get logs from node latest-worker2 pod pod-93b2ea4a-8070-441e-b1e1-a6a6eacf992c container test-container: STEP: delete the pod Mar 9 00:00:01.115: INFO: Waiting for pod pod-93b2ea4a-8070-441e-b1e1-a6a6eacf992c to disappear Mar 9 00:00:01.120: INFO: Pod pod-93b2ea4a-8070-441e-b1e1-a6a6eacf992c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:00:01.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9835" for this suite. 
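The EmptyDir case above ((non-root,0644,tmpfs)) writes a 0644-mode file into a memory-backed emptyDir as a non-root user and checks the result. A rough equivalent; the real test uses a dedicated mount-test image, so the busybox commands here are only indicative:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-example
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001                  # non-root
    containers:
    - name: test-container
      image: busybox:1.29
      command: ["sh", "-c", "echo data > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir:
        medium: Memory                 # tmpfs-backed
  EOF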
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":130,"skipped":2150,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:00:01.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-configmap-58k2 STEP: Creating a pod to test atomic-volume-subpath Mar 9 00:00:01.200: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-58k2" in namespace "subpath-5434" to be "success or failure" Mar 9 00:00:01.215: INFO: Pod "pod-subpath-test-configmap-58k2": Phase="Pending", Reason="", readiness=false. Elapsed: 15.032514ms Mar 9 00:00:03.219: INFO: Pod "pod-subpath-test-configmap-58k2": Phase="Running", Reason="", readiness=true. Elapsed: 2.018397548s Mar 9 00:00:05.229: INFO: Pod "pod-subpath-test-configmap-58k2": Phase="Running", Reason="", readiness=true. Elapsed: 4.028304625s Mar 9 00:00:07.232: INFO: Pod "pod-subpath-test-configmap-58k2": Phase="Running", Reason="", readiness=true. Elapsed: 6.031457844s Mar 9 00:00:09.236: INFO: Pod "pod-subpath-test-configmap-58k2": Phase="Running", Reason="", readiness=true. Elapsed: 8.035346999s Mar 9 00:00:11.239: INFO: Pod "pod-subpath-test-configmap-58k2": Phase="Running", Reason="", readiness=true. Elapsed: 10.038561111s Mar 9 00:00:13.257: INFO: Pod "pod-subpath-test-configmap-58k2": Phase="Running", Reason="", readiness=true. Elapsed: 12.056284322s Mar 9 00:00:15.261: INFO: Pod "pod-subpath-test-configmap-58k2": Phase="Running", Reason="", readiness=true. Elapsed: 14.060845773s Mar 9 00:00:17.265: INFO: Pod "pod-subpath-test-configmap-58k2": Phase="Running", Reason="", readiness=true. Elapsed: 16.064780147s Mar 9 00:00:19.269: INFO: Pod "pod-subpath-test-configmap-58k2": Phase="Running", Reason="", readiness=true. Elapsed: 18.069047169s Mar 9 00:00:21.303: INFO: Pod "pod-subpath-test-configmap-58k2": Phase="Running", Reason="", readiness=true. Elapsed: 20.102708272s Mar 9 00:00:23.327: INFO: Pod "pod-subpath-test-configmap-58k2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.126447668s STEP: Saw pod success Mar 9 00:00:23.327: INFO: Pod "pod-subpath-test-configmap-58k2" satisfied condition "success or failure" Mar 9 00:00:23.330: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-58k2 container test-container-subpath-configmap-58k2: STEP: delete the pod Mar 9 00:00:23.364: INFO: Waiting for pod pod-subpath-test-configmap-58k2 to disappear Mar 9 00:00:23.395: INFO: Pod pod-subpath-test-configmap-58k2 no longer exists STEP: Deleting pod pod-subpath-test-configmap-58k2 Mar 9 00:00:23.395: INFO: Deleting pod "pod-subpath-test-configmap-58k2" in namespace "subpath-5434" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:00:23.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5434" for this suite. • [SLOW TEST:22.277 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":280,"completed":131,"skipped":2172,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:00:23.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
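The Subpath spec that finished just above ran a pod for about 22 seconds whose container mounts a single ConfigMap key over an existing file via subPath. A minimal sketch of that mount shape; the names and the target file are illustrative:

  kubectl create configmap subpath-example --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-subpath-configmap-example
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox:1.29
      command: ["sh", "-c", "cat /etc/hostname"]   # file now backed by the ConfigMap key
      volumeMounts:
      - name: cm
        mountPath: /etc/hostname                   # an existing file in the image
        subPath: data-1
    volumes:
    - name: cm
      configMap:
        name: subpath-example
  EOF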
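The Container Lifecycle Hook spec being set up here first runs a handler pod, then creates a pod whose postStart hook issues an httpGet against that handler. A rough sketch of the hooked pod; the host IP is a placeholder for the handler pod's address:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-poststart-http-hook-example
  spec:
    containers:
    - name: app
      image: busybox:1.29
      command: ["sh", "-c", "sleep 3600"]
      lifecycle:
        postStart:
          httpGet:
            host: 10.244.0.10    # placeholder: IP of the handler pod
            path: /echo?msg=poststart
            port: 8080
  EOF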
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 9 00:00:27.508: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 9 00:00:27.529: INFO: Pod pod-with-poststart-http-hook still exists Mar 9 00:00:29.529: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 9 00:00:29.534: INFO: Pod pod-with-poststart-http-hook still exists Mar 9 00:00:31.529: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 9 00:00:31.533: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:00:31.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5715" for this suite. • [SLOW TEST:8.137 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":280,"completed":132,"skipped":2179,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:00:31.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88 Mar 9 00:00:31.587: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 9 00:00:31.600: INFO: Waiting for terminating namespaces to be deleted... 
Mar 9 00:00:31.602: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 9 00:00:31.607: INFO: kube-proxy-9jc24 from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 9 00:00:31.607: INFO: Container kube-proxy ready: true, restart count 0 Mar 9 00:00:31.607: INFO: kindnet-2j5xm from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 9 00:00:31.607: INFO: Container kindnet-cni ready: true, restart count 0 Mar 9 00:00:31.607: INFO: pod-handle-http-request from container-lifecycle-hook-5715 started at 2020-03-09 00:00:23 +0000 UTC (1 container statuses recorded) Mar 9 00:00:31.607: INFO: Container pod-handle-http-request ready: true, restart count 0 Mar 9 00:00:31.607: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 9 00:00:31.612: INFO: kube-proxy-cx5xz from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 9 00:00:31.612: INFO: Container kube-proxy ready: true, restart count 0 Mar 9 00:00:31.612: INFO: kindnet-spz5f from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 9 00:00:31.612: INFO: Container kindnet-cni ready: true, restart count 0 Mar 9 00:00:31.612: INFO: coredns-6955765f44-cgshp from kube-system started at 2020-03-08 14:50:16 +0000 UTC (1 container statuses recorded) Mar 9 00:00:31.612: INFO: Container coredns ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-59a03a17-cfe8-450a-b431-f0703c85eba1 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-59a03a17-cfe8-450a-b431-f0703c85eba1 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-59a03a17-cfe8-450a-b431-f0703c85eba1 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:05:37.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2721" for this suite. 
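The scheduling check above boils down to two pods sharing hostPort 54322/TCP: pod4 binds the empty hostIP (treated as 0.0.0.0) and schedules, while pod5 asks for 127.0.0.1 on the same node and must stay Pending, because 0.0.0.0 conflicts with every specific hostIP. A compact sketch; the image, commands, and node pinning are illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod4
  spec:
    nodeSelector:
      kubernetes.io/hostname: latest-worker   # pin both pods to one node
    containers:
    - name: app
      image: busybox:1.29
      command: ["sh", "-c", "sleep 3600"]
      ports:
      - containerPort: 8080
        hostPort: 54322
        protocol: TCP            # hostIP omitted = 0.0.0.0
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod5
  spec:
    nodeSelector:
      kubernetes.io/hostname: latest-worker
    containers:
    - name: app
      image: busybox:1.29
      command: ["sh", "-c", "sleep 3600"]
      ports:
      - containerPort: 8080
        hostPort: 54322
        hostIP: 127.0.0.1        # conflicts with pod4's 0.0.0.0 -> stays Pending
        protocol: TCP
  EOF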
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 • [SLOW TEST:306.262 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":280,"completed":133,"skipped":2200,"failed":0} S ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:05:37.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:05:37.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3778" for this suite. 
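The QOS-class spec above submits a pod whose requests equal its limits for both cpu and memory, then reads status.qosClass back from the API. A sketch with illustrative values:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: qos-guaranteed-example
  spec:
    containers:
    - name: app
      image: busybox:1.29
      command: ["sh", "-c", "sleep 3600"]
      resources:
        requests:
          cpu: 100m
          memory: 100Mi
        limits:
          cpu: 100m              # requests == limits for every resource
          memory: 100Mi          # => QoS class "Guaranteed"
  EOF
  kubectl get pod qos-guaranteed-example -o jsonpath='{.status.qosClass}'   # Guaranteed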
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":280,"completed":134,"skipped":2201,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:05:37.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:05:42.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6807" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":135,"skipped":2213,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:05:42.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the initial replication controller Mar 9 00:05:42.153: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4860' Mar 9 00:05:42.398: INFO: stderr: "" Mar 9 00:05:42.398: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 9 00:05:42.399: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4860' Mar 9 00:05:42.526: INFO: stderr: "" Mar 9 00:05:42.526: INFO: stdout: "update-demo-nautilus-8hx5b update-demo-nautilus-tbnps " Mar 9 00:05:42.526: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hx5b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4860' Mar 9 00:05:42.600: INFO: stderr: "" Mar 9 00:05:42.600: INFO: stdout: "" Mar 9 00:05:42.600: INFO: update-demo-nautilus-8hx5b is created but not running Mar 9 00:05:47.600: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4860' Mar 9 00:05:47.734: INFO: stderr: "" Mar 9 00:05:47.734: INFO: stdout: "update-demo-nautilus-8hx5b update-demo-nautilus-tbnps " Mar 9 00:05:47.734: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hx5b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4860' Mar 9 00:05:47.829: INFO: stderr: "" Mar 9 00:05:47.829: INFO: stdout: "true" Mar 9 00:05:47.829: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hx5b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4860' Mar 9 00:05:47.924: INFO: stderr: "" Mar 9 00:05:47.924: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 9 00:05:47.924: INFO: validating pod update-demo-nautilus-8hx5b Mar 9 00:05:47.927: INFO: got data: { "image": "nautilus.jpg" } Mar 9 00:05:47.927: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 9 00:05:47.927: INFO: update-demo-nautilus-8hx5b is verified up and running Mar 9 00:05:47.927: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tbnps -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4860' Mar 9 00:05:48.001: INFO: stderr: "" Mar 9 00:05:48.001: INFO: stdout: "true" Mar 9 00:05:48.001: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tbnps -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4860' Mar 9 00:05:48.099: INFO: stderr: "" Mar 9 00:05:48.100: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 9 00:05:48.100: INFO: validating pod update-demo-nautilus-tbnps Mar 9 00:05:48.108: INFO: got data: { "image": "nautilus.jpg" } Mar 9 00:05:48.108: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 9 00:05:48.108: INFO: update-demo-nautilus-tbnps is verified up and running STEP: rolling-update to new replication controller Mar 9 00:05:48.110: INFO: scanned /root for discovery docs: Mar 9 00:05:48.110: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-4860' Mar 9 00:06:10.597: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 9 00:06:10.597: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 9 00:06:10.597: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4860' Mar 9 00:06:10.690: INFO: stderr: "" Mar 9 00:06:10.690: INFO: stdout: "update-demo-kitten-rsjfp update-demo-kitten-vvwn6 update-demo-nautilus-tbnps " STEP: Replicas for name=update-demo: expected=2 actual=3 Mar 9 00:06:15.690: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4860' Mar 9 00:06:15.821: INFO: stderr: "" Mar 9 00:06:15.821: INFO: stdout: "update-demo-kitten-rsjfp update-demo-kitten-vvwn6 " Mar 9 00:06:15.821: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-kitten-rsjfp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4860' Mar 9 00:06:15.922: INFO: stderr: "" Mar 9 00:06:15.922: INFO: stdout: "true" Mar 9 00:06:15.922: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-kitten-rsjfp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4860' Mar 9 00:06:16.022: INFO: stderr: "" Mar 9 00:06:16.022: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 9 00:06:16.022: INFO: validating pod update-demo-kitten-rsjfp Mar 9 00:06:16.026: INFO: got data: { "image": "kitten.jpg" } Mar 9 00:06:16.026: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . 
Mar 9 00:06:16.026: INFO: update-demo-kitten-rsjfp is verified up and running Mar 9 00:06:16.026: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-kitten-vvwn6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4860' Mar 9 00:06:16.102: INFO: stderr: "" Mar 9 00:06:16.102: INFO: stdout: "true" Mar 9 00:06:16.102: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-kitten-vvwn6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4860' Mar 9 00:06:16.176: INFO: stderr: "" Mar 9 00:06:16.176: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 9 00:06:16.176: INFO: validating pod update-demo-kitten-vvwn6 Mar 9 00:06:16.179: INFO: got data: { "image": "kitten.jpg" } Mar 9 00:06:16.179: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 9 00:06:16.179: INFO: update-demo-kitten-vvwn6 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:06:16.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4860" for this suite. • [SLOW TEST:34.077 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":280,"completed":136,"skipped":2222,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:06:16.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 9 00:06:16.261: INFO: Creating ReplicaSet my-hostname-basic-ff5682d2-d10d-4190-a227-8aaba67bac63 Mar 9 00:06:16.269: INFO: Pod name my-hostname-basic-ff5682d2-d10d-4190-a227-8aaba67bac63: Found 0 pods out of 1 Mar 9 00:06:21.290: INFO: Pod name my-hostname-basic-ff5682d2-d10d-4190-a227-8aaba67bac63: Found 1 pods out of 1 Mar 9 00:06:21.290: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-ff5682d2-d10d-4190-a227-8aaba67bac63" is running Mar 9 00:06:21.305: INFO: Pod "my-hostname-basic-ff5682d2-d10d-4190-a227-8aaba67bac63-hkd4h" is running (conditions: 
[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-09 00:06:16 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-09 00:06:18 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-09 00:06:18 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-09 00:06:16 +0000 UTC Reason: Message:}]) Mar 9 00:06:21.305: INFO: Trying to dial the pod Mar 9 00:06:26.326: INFO: Controller my-hostname-basic-ff5682d2-d10d-4190-a227-8aaba67bac63: Got expected result from replica 1 [my-hostname-basic-ff5682d2-d10d-4190-a227-8aaba67bac63-hkd4h]: "my-hostname-basic-ff5682d2-d10d-4190-a227-8aaba67bac63-hkd4h", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:06:26.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3242" for this suite. • [SLOW TEST:10.148 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":280,"completed":137,"skipped":2247,"failed":0} [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:06:26.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:06:26.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4643" for this suite. 
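The Services spec above simply lists services across all namespaces and checks that the one it created shows up. Manual equivalents of that cross-namespace list:

  kubectl get services --all-namespaces    # client-side view
  kubectl get --raw /api/v1/services       # the underlying cluster-wide list call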
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":280,"completed":138,"skipped":2247,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:06:26.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating server pod server in namespace prestop-6011 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-6011 STEP: Deleting pre-stop pod Mar 9 00:06:35.610: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:06:35.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-6011" for this suite. 
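In the PreStop spec above, the tester pod's preStop hook reports back to the server pod while the tester is being deleted, which is why the server's recorded state shows "prestop": 1. A rough shape of the tester pod; the target URL is a placeholder for the server pod's address:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: tester-example
  spec:
    terminationGracePeriodSeconds: 30
    containers:
    - name: tester
      image: busybox:1.29
      command: ["sh", "-c", "sleep 3600"]
      lifecycle:
        preStop:
          exec:
            command: ["sh", "-c", "wget -q -O- http://10.244.0.11:8080/prestop"]  # placeholder server IP
  EOF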
• [SLOW TEST:9.222 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":280,"completed":139,"skipped":2287,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:06:35.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 9 00:06:35.805: INFO: Waiting up to 5m0s for pod "downwardapi-volume-48a966df-fd81-48cd-bbb4-22255af9779f" in namespace "downward-api-2199" to be "success or failure" Mar 9 00:06:35.808: INFO: Pod "downwardapi-volume-48a966df-fd81-48cd-bbb4-22255af9779f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.608319ms Mar 9 00:06:37.812: INFO: Pod "downwardapi-volume-48a966df-fd81-48cd-bbb4-22255af9779f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006437116s STEP: Saw pod success Mar 9 00:06:37.812: INFO: Pod "downwardapi-volume-48a966df-fd81-48cd-bbb4-22255af9779f" satisfied condition "success or failure" Mar 9 00:06:37.815: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-48a966df-fd81-48cd-bbb4-22255af9779f container client-container: STEP: delete the pod Mar 9 00:06:37.858: INFO: Waiting for pod downwardapi-volume-48a966df-fd81-48cd-bbb4-22255af9779f to disappear Mar 9 00:06:37.863: INFO: Pod downwardapi-volume-48a966df-fd81-48cd-bbb4-22255af9779f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:06:37.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2199" for this suite. 
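The Downward API spec above deliberately leaves the container's memory limit unset, so the downwardAPI volume reports the node's allocatable memory instead. A sketch; the name, image, and divisor are illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-default-memlimit-example
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox:1.29
      command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
      # no resources.limits here: limits.memory falls back to node allocatable
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: memory_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.memory
            divisor: 1Mi
  EOF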
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":140,"skipped":2288,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:06:37.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test substitution in container's command Mar 9 00:06:37.919: INFO: Waiting up to 5m0s for pod "var-expansion-aa4a56e6-262e-4d5b-9404-ddf29763f2fb" in namespace "var-expansion-6900" to be "success or failure" Mar 9 00:06:37.923: INFO: Pod "var-expansion-aa4a56e6-262e-4d5b-9404-ddf29763f2fb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.811468ms Mar 9 00:06:39.938: INFO: Pod "var-expansion-aa4a56e6-262e-4d5b-9404-ddf29763f2fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018380464s STEP: Saw pod success Mar 9 00:06:39.938: INFO: Pod "var-expansion-aa4a56e6-262e-4d5b-9404-ddf29763f2fb" satisfied condition "success or failure" Mar 9 00:06:39.940: INFO: Trying to get logs from node latest-worker pod var-expansion-aa4a56e6-262e-4d5b-9404-ddf29763f2fb container dapi-container: STEP: delete the pod Mar 9 00:06:39.985: INFO: Waiting for pod var-expansion-aa4a56e6-262e-4d5b-9404-ddf29763f2fb to disappear Mar 9 00:06:39.993: INFO: Pod var-expansion-aa4a56e6-262e-4d5b-9404-ddf29763f2fb no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:06:39.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6900" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":280,"completed":141,"skipped":2297,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:06:40.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 9 00:06:40.084: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 9 00:06:40.090: INFO: Number of nodes with available pods: 0 Mar 9 00:06:40.090: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Mar 9 00:06:40.129: INFO: Number of nodes with available pods: 0 Mar 9 00:06:40.129: INFO: Node latest-worker is running more than one daemon pod Mar 9 00:06:41.133: INFO: Number of nodes with available pods: 0 Mar 9 00:06:41.133: INFO: Node latest-worker is running more than one daemon pod Mar 9 00:06:42.133: INFO: Number of nodes with available pods: 1 Mar 9 00:06:42.133: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 9 00:06:42.167: INFO: Number of nodes with available pods: 1 Mar 9 00:06:42.168: INFO: Number of running nodes: 0, number of available pods: 1 Mar 9 00:06:43.170: INFO: Number of nodes with available pods: 0 Mar 9 00:06:43.170: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 9 00:06:43.207: INFO: Number of nodes with available pods: 0 Mar 9 00:06:43.207: INFO: Node latest-worker is running more than one daemon pod Mar 9 00:06:44.211: INFO: Number of nodes with available pods: 0 Mar 9 00:06:44.211: INFO: Node latest-worker is running more than one daemon pod Mar 9 00:06:45.219: INFO: Number of nodes with available pods: 0 Mar 9 00:06:45.219: INFO: Node latest-worker is running more than one daemon pod Mar 9 00:06:46.216: INFO: Number of nodes with available pods: 0 Mar 9 00:06:46.216: INFO: Node latest-worker is running more than one daemon pod Mar 9 00:06:47.211: INFO: Number of nodes with available pods: 0 Mar 9 00:06:47.212: INFO: Node latest-worker is running more than one daemon pod Mar 9 00:06:48.211: INFO: Number of nodes with available pods: 0 Mar 9 00:06:48.211: INFO: Node latest-worker is running more than one daemon pod Mar 9 00:06:49.212: INFO: Number of nodes with available pods: 0 Mar 9 00:06:49.212: INFO: Node latest-worker is running more than one daemon pod Mar 9 00:06:50.212: INFO: Number of nodes with available pods: 0 Mar 9 00:06:50.212: INFO: Node latest-worker is running more than one daemon pod Mar 9 00:06:51.212: INFO: 
Number of nodes with available pods: 0 Mar 9 00:06:51.212: INFO: Node latest-worker is running more than one daemon pod Mar 9 00:06:52.212: INFO: Number of nodes with available pods: 0 Mar 9 00:06:52.212: INFO: Node latest-worker is running more than one daemon pod Mar 9 00:06:53.211: INFO: Number of nodes with available pods: 0 Mar 9 00:06:53.211: INFO: Node latest-worker is running more than one daemon pod Mar 9 00:06:54.211: INFO: Number of nodes with available pods: 0 Mar 9 00:06:54.211: INFO: Node latest-worker is running more than one daemon pod Mar 9 00:06:55.211: INFO: Number of nodes with available pods: 1 Mar 9 00:06:55.211: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1856, will wait for the garbage collector to delete the pods Mar 9 00:06:55.273: INFO: Deleting DaemonSet.extensions daemon-set took: 4.624138ms Mar 9 00:06:55.573: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.223759ms Mar 9 00:06:58.676: INFO: Number of nodes with available pods: 0 Mar 9 00:06:58.676: INFO: Number of running nodes: 0, number of available pods: 0 Mar 9 00:06:58.678: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1856/daemonsets","resourceVersion":"140136"},"items":null} Mar 9 00:06:58.681: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1856/pods","resourceVersion":"140136"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:06:58.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1856" for this suite. 
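The DaemonSet spec above drives scheduling purely through a node-selector label (blue/green in the log) and switches the update strategy to RollingUpdate mid-test. A reduced sketch; the label key and values, names, and image are illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set-example
  spec:
    selector:
      matchLabels:
        app: daemon-set-example
    updateStrategy:
      type: RollingUpdate
    template:
      metadata:
        labels:
          app: daemon-set-example
      spec:
        nodeSelector:
          color: blue            # pods run only on nodes labeled color=blue
        containers:
        - name: app
          image: busybox:1.29
          command: ["sh", "-c", "sleep 3600"]
  EOF
  kubectl label node latest-worker color=blue                 # daemon pod appears on the node
  kubectl label node latest-worker color=green --overwrite    # daemon pod is unscheduled again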
• [SLOW TEST:18.726 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":280,"completed":142,"skipped":2333,"failed":0} S ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:06:58.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod busybox-0c642878-2a71-4a00-9b16-30bbbb851b3e in namespace container-probe-2352 Mar 9 00:07:00.850: INFO: Started pod busybox-0c642878-2a71-4a00-9b16-30bbbb851b3e in namespace container-probe-2352 STEP: checking the pod's current state and verifying that restartCount is present Mar 9 00:07:00.853: INFO: Initial restart count of pod busybox-0c642878-2a71-4a00-9b16-30bbbb851b3e is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:11:01.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2352" for this suite. 
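The probe spec above watches a pod for roughly four minutes and asserts its restart count stays 0, because the file the probe reads never disappears. A sketch; the command and timings are illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-exec-stable-example
  spec:
    containers:
    - name: busybox
      image: busybox:1.29
      command: ["sh", "-c", "touch /tmp/health && sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/health"]   # keeps succeeding -> restartCount stays 0
        initialDelaySeconds: 5
        periodSeconds: 5
  EOF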
• [SLOW TEST:242.697 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":280,"completed":143,"skipped":2334,"failed":0} [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:11:01.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 9 00:11:01.504: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-deb91a07-3861-40e9-b62c-0ca72efa95c5" in namespace "security-context-test-2493" to be "success or failure" Mar 9 00:11:01.521: INFO: Pod "busybox-readonly-false-deb91a07-3861-40e9-b62c-0ca72efa95c5": Phase="Pending", Reason="", readiness=false. Elapsed: 17.367258ms Mar 9 00:11:03.525: INFO: Pod "busybox-readonly-false-deb91a07-3861-40e9-b62c-0ca72efa95c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021882949s Mar 9 00:11:03.525: INFO: Pod "busybox-readonly-false-deb91a07-3861-40e9-b62c-0ca72efa95c5" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:11:03.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2493" for this suite. 
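The Security Context case above runs a container with readOnlyRootFilesystem=false and expects a write to the root filesystem to succeed. A sketch:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox-readonly-false-example
  spec:
    restartPolicy: Never
    containers:
    - name: busybox-readonly-false
      image: busybox:1.29
      command: ["sh", "-c", "echo ok > /tmp/write-test"]   # write onto the root fs
      securityContext:
        readOnlyRootFilesystem: false   # rootfs stays writable, so the write succeeds
  EOF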
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":280,"completed":144,"skipped":2334,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:11:03.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod liveness-8d201a0e-6ac5-4302-a617-efcca9bed3d5 in namespace container-probe-2312 Mar 9 00:11:05.613: INFO: Started pod liveness-8d201a0e-6ac5-4302-a617-efcca9bed3d5 in namespace container-probe-2312 STEP: checking the pod's current state and verifying that restartCount is present Mar 9 00:11:05.616: INFO: Initial restart count of pod liveness-8d201a0e-6ac5-4302-a617-efcca9bed3d5 is 0 Mar 9 00:11:21.650: INFO: Restart count of pod container-probe-2312/liveness-8d201a0e-6ac5-4302-a617-efcca9bed3d5 is now 1 (16.034625401s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:11:21.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2312" for this suite. 
• [SLOW TEST:18.180 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":280,"completed":145,"skipped":2338,"failed":0} SS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:11:21.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 9 00:11:21.841: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-9450581b-5dcb-447e-90b9-9271ad7b7852" in namespace "security-context-test-9829" to be "success or failure" Mar 9 00:11:21.849: INFO: Pod "alpine-nnp-false-9450581b-5dcb-447e-90b9-9271ad7b7852": Phase="Pending", Reason="", readiness=false. Elapsed: 7.992233ms Mar 9 00:11:23.853: INFO: Pod "alpine-nnp-false-9450581b-5dcb-447e-90b9-9271ad7b7852": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012030493s Mar 9 00:11:25.858: INFO: Pod "alpine-nnp-false-9450581b-5dcb-447e-90b9-9271ad7b7852": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016122916s Mar 9 00:11:25.858: INFO: Pod "alpine-nnp-false-9450581b-5dcb-447e-90b9-9271ad7b7852" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:11:25.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9829" for this suite. 
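The AllowPrivilegeEscalation case above runs as a non-root user with allowPrivilegeEscalation=false, which sets no_new_privs so setuid binaries cannot raise the effective uid. A sketch; the real test uses a purpose-built image, so the command here is only indicative:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: alpine-nnp-false-example
  spec:
    restartPolicy: Never
    containers:
    - name: alpine-nnp-false
      image: alpine:3.11
      command: ["sh", "-c", "id"]           # effective uid must remain 1000
      securityContext:
        runAsUser: 1000
        allowPrivilegeEscalation: false     # no_new_privs: setuid cannot escalate
  EOF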
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":146,"skipped":2340,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:11:25.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 9 00:11:25.929: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config version' Mar 9 00:11:26.085: INFO: stderr: "" Mar 9 00:11:26.085: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18+\", GitVersion:\"v1.18.0-alpha.2.152+426b3538900329\", GitCommit:\"426b3538900329ed2ce5a0cb1cccf2f0ff32db60\", GitTreeState:\"clean\", BuildDate:\"2020-01-25T12:55:25Z\", GoVersion:\"go1.13.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:09:19Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:11:26.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5039" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":280,"completed":147,"skipped":2344,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:11:26.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-upd-661008e9-42f0-44db-8504-60f8a4083ad0 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-661008e9-42f0-44db-8504-60f8a4083ad0 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:11:30.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5668" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":148,"skipped":2347,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:11:30.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 9 00:11:30.669: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 9 00:11:32.680: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719309490, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719309490, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719309490, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719309490, 
loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 9 00:11:35.695: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:11:35.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6815" for this suite. STEP: Destroying namespace "webhook-6815-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:5.660 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":280,"completed":149,"skipped":2371,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:11:35.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating Agnhost RC Mar 9 00:11:35.965: INFO: namespace kubectl-6961 Mar 9 00:11:35.965: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6961' Mar 9 00:11:38.401: INFO: stderr: "" Mar 9 00:11:38.401: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Mar 9 00:11:39.405: INFO: Selector matched 1 pods for map[app:agnhost] Mar 9 00:11:39.405: INFO: Found 0 / 1 Mar 9 00:11:40.404: INFO: Selector matched 1 pods for map[app:agnhost] Mar 9 00:11:40.404: INFO: Found 1 / 1 Mar 9 00:11:40.404: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 9 00:11:40.407: INFO: Selector matched 1 pods for map[app:agnhost] Mar 9 00:11:40.407: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 9 00:11:40.407: INFO: wait on agnhost-master startup in kubectl-6961 Mar 9 00:11:40.407: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs agnhost-master-w5wbj agnhost-master --namespace=kubectl-6961' Mar 9 00:11:40.518: INFO: stderr: "" Mar 9 00:11:40.518: INFO: stdout: "Paused\n" STEP: exposing RC Mar 9 00:11:40.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6961' Mar 9 00:11:40.627: INFO: stderr: "" Mar 9 00:11:40.627: INFO: stdout: "service/rm2 exposed\n" Mar 9 00:11:40.633: INFO: Service rm2 in namespace kubectl-6961 found. STEP: exposing service Mar 9 00:11:42.638: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6961' Mar 9 00:11:42.731: INFO: stderr: "" Mar 9 00:11:42.731: INFO: stdout: "service/rm3 exposed\n" Mar 9 00:11:42.736: INFO: Service rm3 in namespace kubectl-6961 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:11:44.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6961" for this suite. 
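Stripped of the --server/--kubeconfig/--namespace plumbing, the two expose forms the test drives are just these (mirroring the logged commands; run them in the namespace holding the RC):

kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
kubectl get svc rm2 rm3    # both services front the same pods on different ports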
• [SLOW TEST:8.863 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1297 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":280,"completed":150,"skipped":2383,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:11:44.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-map-956757ee-499e-4272-aaae-5af2ce656143 STEP: Creating a pod to test consume configMaps Mar 9 00:11:44.806: INFO: Waiting up to 5m0s for pod "pod-configmaps-f432188b-c77b-486f-9816-ad9223266923" in namespace "configmap-6267" to be "success or failure" Mar 9 00:11:44.808: INFO: Pod "pod-configmaps-f432188b-c77b-486f-9816-ad9223266923": Phase="Pending", Reason="", readiness=false. Elapsed: 2.773631ms Mar 9 00:11:46.822: INFO: Pod "pod-configmaps-f432188b-c77b-486f-9816-ad9223266923": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016784079s Mar 9 00:11:48.827: INFO: Pod "pod-configmaps-f432188b-c77b-486f-9816-ad9223266923": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020942719s STEP: Saw pod success Mar 9 00:11:48.827: INFO: Pod "pod-configmaps-f432188b-c77b-486f-9816-ad9223266923" satisfied condition "success or failure" Mar 9 00:11:48.830: INFO: Trying to get logs from node latest-worker pod pod-configmaps-f432188b-c77b-486f-9816-ad9223266923 container configmap-volume-test: STEP: delete the pod Mar 9 00:11:48.896: INFO: Waiting for pod pod-configmaps-f432188b-c77b-486f-9816-ad9223266923 to disappear Mar 9 00:11:48.906: INFO: Pod pod-configmaps-f432188b-c77b-486f-9816-ad9223266923 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:11:48.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6267" for this suite. 
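A minimal sketch of the mapped-ConfigMap-volume pattern this test covers: the volume's items list renames a key to a new path, and a non-root container reads it. All names below are illustrative.

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mapped-demo
spec:
  securityContext:
    runAsUser: 1000                    # non-root, matching the test's intent
  containers:
  - name: reader
    image: busybox:1.29
    command: ["cat", "/etc/config/path/to/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:
      name: demo-config
      items:
      - key: data-1
        path: path/to/data-1           # the "mapping": key renamed inside the volume
EOF
kubectl logs configmap-mapped-demo     # expect: value-1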
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":280,"completed":151,"skipped":2384,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:11:48.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: validating cluster-info Mar 9 00:11:49.023: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config cluster-info' Mar 9 00:11:49.134: INFO: stderr: "" Mar 9 00:11:49.134: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32776\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32776/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:11:49.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8673" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":280,"completed":152,"skipped":2460,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:11:49.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 9 00:11:49.187: INFO: Waiting up to 5m0s for pod "busybox-user-65534-32419acb-0b1f-456a-b12f-69e946653f3f" in namespace "security-context-test-6473" to be "success or failure" Mar 9 00:11:49.191: INFO: Pod "busybox-user-65534-32419acb-0b1f-456a-b12f-69e946653f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326303ms Mar 9 00:11:51.195: INFO: Pod "busybox-user-65534-32419acb-0b1f-456a-b12f-69e946653f3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007871495s Mar 9 00:11:51.195: INFO: Pod "busybox-user-65534-32419acb-0b1f-456a-b12f-69e946653f3f" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:11:51.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6473" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":153,"skipped":2480,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:11:51.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Mar 9 00:11:51.273: INFO: Waiting up to 5m0s for pod "downward-api-f5cf3864-39a9-491b-97ce-fac68249f945" in namespace "downward-api-6491" to be "success or failure" Mar 9 00:11:51.276: INFO: Pod "downward-api-f5cf3864-39a9-491b-97ce-fac68249f945": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.964922ms Mar 9 00:11:53.295: INFO: Pod "downward-api-f5cf3864-39a9-491b-97ce-fac68249f945": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021893849s STEP: Saw pod success Mar 9 00:11:53.295: INFO: Pod "downward-api-f5cf3864-39a9-491b-97ce-fac68249f945" satisfied condition "success or failure" Mar 9 00:11:53.298: INFO: Trying to get logs from node latest-worker pod downward-api-f5cf3864-39a9-491b-97ce-fac68249f945 container dapi-container: STEP: delete the pod Mar 9 00:11:53.327: INFO: Waiting for pod downward-api-f5cf3864-39a9-491b-97ce-fac68249f945 to disappear Mar 9 00:11:53.336: INFO: Pod downward-api-f5cf3864-39a9-491b-97ce-fac68249f945 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:11:53.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6491" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":280,"completed":154,"skipped":2487,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:11:53.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5692.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5692.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 9 00:11:57.422: INFO: DNS probes using dns-5692/dns-test-1e6158e9-dbc6-4289-84d0-e0f5b214321d succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:11:57.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5692" for this suite. •{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":280,"completed":155,"skipped":2498,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:11:57.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 9 00:11:57.530: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fd8a136b-7ead-45e8-bdbe-c534e25fd228" in namespace "projected-1362" to be "success or failure" Mar 9 00:11:57.549: INFO: Pod "downwardapi-volume-fd8a136b-7ead-45e8-bdbe-c534e25fd228": Phase="Pending", Reason="", readiness=false. Elapsed: 18.341099ms Mar 9 00:11:59.552: INFO: Pod "downwardapi-volume-fd8a136b-7ead-45e8-bdbe-c534e25fd228": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.022088989s STEP: Saw pod success Mar 9 00:11:59.553: INFO: Pod "downwardapi-volume-fd8a136b-7ead-45e8-bdbe-c534e25fd228" satisfied condition "success or failure" Mar 9 00:11:59.556: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-fd8a136b-7ead-45e8-bdbe-c534e25fd228 container client-container: STEP: delete the pod Mar 9 00:11:59.590: INFO: Waiting for pod downwardapi-volume-fd8a136b-7ead-45e8-bdbe-c534e25fd228 to disappear Mar 9 00:11:59.599: INFO: Pod downwardapi-volume-fd8a136b-7ead-45e8-bdbe-c534e25fd228 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:11:59.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1362" for this suite. 
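A minimal sketch of the projected downward API volume that this cpu-request test exercises, with illustrative names. The divisor is set to 1m so the file reports millicores; with the default divisor of 1 the value is rounded up to whole cores.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cpu-request-demo
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m              # report in millicores
EOF
kubectl logs cpu-request-demo          # expect: 250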
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":280,"completed":156,"skipped":2499,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:11:59.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-map-308018f1-788d-49a2-a3c2-a63bbccd4fb1 STEP: Creating a pod to test consume secrets Mar 9 00:11:59.738: INFO: Waiting up to 5m0s for pod "pod-secrets-2da64a8c-1c00-4594-86ce-0b289ae2a353" in namespace "secrets-7509" to be "success or failure" Mar 9 00:11:59.743: INFO: Pod "pod-secrets-2da64a8c-1c00-4594-86ce-0b289ae2a353": Phase="Pending", Reason="", readiness=false. Elapsed: 4.764982ms Mar 9 00:12:01.747: INFO: Pod "pod-secrets-2da64a8c-1c00-4594-86ce-0b289ae2a353": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008880143s STEP: Saw pod success Mar 9 00:12:01.747: INFO: Pod "pod-secrets-2da64a8c-1c00-4594-86ce-0b289ae2a353" satisfied condition "success or failure" Mar 9 00:12:01.751: INFO: Trying to get logs from node latest-worker pod pod-secrets-2da64a8c-1c00-4594-86ce-0b289ae2a353 container secret-volume-test: STEP: delete the pod Mar 9 00:12:01.802: INFO: Waiting for pod pod-secrets-2da64a8c-1c00-4594-86ce-0b289ae2a353 to disappear Mar 9 00:12:01.806: INFO: Pod pod-secrets-2da64a8c-1c00-4594-86ce-0b289ae2a353 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:12:01.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7509" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":157,"skipped":2516,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:12:01.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Mar 9 00:12:01.889: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:12:05.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4170" for this suite. •{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":280,"completed":158,"skipped":2564,"failed":0} SSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:12:05.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap that has name configmap-test-emptyKey-46d8e2fd-934e-457a-bda3-89597015190c [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:12:05.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6937" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":280,"completed":159,"skipped":2568,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:12:05.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 9 00:12:05.383: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 9 00:12:08.259: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4646 create -f -' Mar 9 00:12:10.625: INFO: stderr: "" Mar 9 00:12:10.625: INFO: stdout: "e2e-test-crd-publish-openapi-31-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 9 00:12:10.625: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4646 delete e2e-test-crd-publish-openapi-31-crds test-cr' Mar 9 00:12:10.764: INFO: stderr: "" Mar 9 00:12:10.764: INFO: stdout: "e2e-test-crd-publish-openapi-31-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Mar 9 00:12:10.764: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4646 apply -f -' Mar 9 00:12:11.053: INFO: stderr: "" Mar 9 00:12:11.054: INFO: stdout: "e2e-test-crd-publish-openapi-31-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 9 00:12:11.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4646 delete e2e-test-crd-publish-openapi-31-crds test-cr' Mar 9 00:12:11.164: INFO: stderr: "" Mar 9 00:12:11.164: INFO: stdout: "e2e-test-crd-publish-openapi-31-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 9 00:12:11.165: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-31-crds' Mar 9 00:12:11.415: INFO: stderr: "" Mar 9 00:12:11.415: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-31-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:12:13.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4646" for this suite. 
• [SLOW TEST:7.902 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":280,"completed":160,"skipped":2570,"failed":0} SSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:12:13.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod test-webserver-ea9d4b28-392d-4863-9f30-00260361fac5 in namespace container-probe-2438 Mar 9 00:12:15.297: INFO: Started pod test-webserver-ea9d4b28-392d-4863-9f30-00260361fac5 in namespace container-probe-2438 STEP: checking the pod's current state and verifying that restartCount is present Mar 9 00:12:15.299: INFO: Initial restart count of pod test-webserver-ea9d4b28-392d-4863-9f30-00260361fac5 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:16:16.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2438" for this suite. 
• [SLOW TEST:243.038 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":280,"completed":161,"skipped":2577,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:16:16.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0309 00:16:26.328857 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 9 00:16:26.328: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:16:26.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3034" for this suite. 
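The garbage-collector behaviour above rests on ownerReferences: pods created by a replication controller point back at it, so a non-orphaning delete of the RC lets the collector remove them. An illustrative sketch:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: gc-demo
spec:
  replicas: 2
  selector:
    app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF
# Each pod carries an ownerReference back to the RC:
kubectl get pods -l app=gc-demo -o jsonpath='{.items[*].metadata.ownerReferences[*].name}'
kubectl delete rc gc-demo              # cascading (non-orphaning) delete is the default
kubectl get pods -l app=gc-demo        # should drain to empty as the collector works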
• [SLOW TEST:10.073 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":280,"completed":162,"skipped":2587,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:16:26.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-cc1f0d99-70c1-48b0-9eab-34ae70198d65 STEP: Creating a pod to test consume secrets Mar 9 00:16:26.425: INFO: Waiting up to 5m0s for pod "pod-secrets-fa737498-d350-4077-b366-247f0069a4eb" in namespace "secrets-8977" to be "success or failure" Mar 9 00:16:26.436: INFO: Pod "pod-secrets-fa737498-d350-4077-b366-247f0069a4eb": Phase="Pending", Reason="", readiness=false. Elapsed: 11.079268ms Mar 9 00:16:28.440: INFO: Pod "pod-secrets-fa737498-d350-4077-b366-247f0069a4eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015363811s Mar 9 00:16:30.444: INFO: Pod "pod-secrets-fa737498-d350-4077-b366-247f0069a4eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019197406s STEP: Saw pod success Mar 9 00:16:30.444: INFO: Pod "pod-secrets-fa737498-d350-4077-b366-247f0069a4eb" satisfied condition "success or failure" Mar 9 00:16:30.447: INFO: Trying to get logs from node latest-worker pod pod-secrets-fa737498-d350-4077-b366-247f0069a4eb container secret-volume-test: STEP: delete the pod Mar 9 00:16:30.516: INFO: Waiting for pod pod-secrets-fa737498-d350-4077-b366-247f0069a4eb to disappear Mar 9 00:16:30.525: INFO: Pod pod-secrets-fa737498-d350-4077-b366-247f0069a4eb no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:16:30.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8977" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":163,"skipped":2599,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:16:30.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:16:38.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3757" for this suite. • [SLOW TEST:8.058 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":280,"completed":164,"skipped":2616,"failed":0} S ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:16:38.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 9 00:16:40.740: INFO: Waiting up to 5m0s for pod "client-envvars-82e9c14b-a739-4c3c-8b2e-61ce65f3863b" in namespace "pods-9904" to be "success or failure" Mar 9 00:16:40.754: INFO: Pod "client-envvars-82e9c14b-a739-4c3c-8b2e-61ce65f3863b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.869717ms Mar 9 00:16:42.757: INFO: Pod "client-envvars-82e9c14b-a739-4c3c-8b2e-61ce65f3863b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.016606068s STEP: Saw pod success Mar 9 00:16:42.757: INFO: Pod "client-envvars-82e9c14b-a739-4c3c-8b2e-61ce65f3863b" satisfied condition "success or failure" Mar 9 00:16:42.759: INFO: Trying to get logs from node latest-worker pod client-envvars-82e9c14b-a739-4c3c-8b2e-61ce65f3863b container env3cont: STEP: delete the pod Mar 9 00:16:42.796: INFO: Waiting for pod client-envvars-82e9c14b-a739-4c3c-8b2e-61ce65f3863b to disappear Mar 9 00:16:42.800: INFO: Pod client-envvars-82e9c14b-a739-4c3c-8b2e-61ce65f3863b no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:16:42.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9904" for this suite. •{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":280,"completed":165,"skipped":2617,"failed":0} SSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:16:42.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Mar 9 00:16:45.390: INFO: Successfully updated pod "adopt-release-fnwqc" STEP: Checking that the Job readopts the Pod Mar 9 00:16:45.390: INFO: Waiting up to 15m0s for pod "adopt-release-fnwqc" in namespace "job-9256" to be "adopted" Mar 9 00:16:45.397: INFO: Pod "adopt-release-fnwqc": Phase="Running", Reason="", readiness=true. Elapsed: 7.021696ms Mar 9 00:16:47.400: INFO: Pod "adopt-release-fnwqc": Phase="Running", Reason="", readiness=true. Elapsed: 2.010738395s Mar 9 00:16:47.400: INFO: Pod "adopt-release-fnwqc" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Mar 9 00:16:47.912: INFO: Successfully updated pod "adopt-release-fnwqc" STEP: Checking that the Job releases the Pod Mar 9 00:16:47.912: INFO: Waiting up to 15m0s for pod "adopt-release-fnwqc" in namespace "job-9256" to be "released" Mar 9 00:16:47.943: INFO: Pod "adopt-release-fnwqc": Phase="Running", Reason="", readiness=true. Elapsed: 30.555071ms Mar 9 00:16:47.943: INFO: Pod "adopt-release-fnwqc" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:16:47.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9256" for this suite. 
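The adopt/release behaviour above hinges on the job's selector labels and the pod's controller ownerReference. A sketch of inspecting and breaking that linkage, using the job-name label the job controller applies (the pod name is a placeholder):

kubectl get pods -l job-name=adopt-release -o jsonpath='{range .items[*]}{.metadata.name} -> {.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}{end}'
# Removing the matching label makes the job controller release the pod:
kubectl label pod <pod-name> job-name-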
• [SLOW TEST:5.210 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":280,"completed":166,"skipped":2621,"failed":0} SSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:16:48.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-8740, will wait for the garbage collector to delete the pods Mar 9 00:16:50.170: INFO: Deleting Job.batch foo took: 6.451039ms Mar 9 00:16:50.271: INFO: Terminating Job.batch foo pods took: 100.249674ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:17:32.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8740" for this suite. • [SLOW TEST:44.571 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":280,"completed":167,"skipped":2628,"failed":0} [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:17:32.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 9 00:17:32.636: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b5b60437-4b27-44e4-9c3d-193568486da3" in namespace "downward-api-5379" to be "success or failure" Mar 9 00:17:32.640: INFO: Pod "downwardapi-volume-b5b60437-4b27-44e4-9c3d-193568486da3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.573296ms Mar 9 00:17:34.644: INFO: Pod "downwardapi-volume-b5b60437-4b27-44e4-9c3d-193568486da3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007409684s Mar 9 00:17:36.648: INFO: Pod "downwardapi-volume-b5b60437-4b27-44e4-9c3d-193568486da3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011505176s STEP: Saw pod success Mar 9 00:17:36.648: INFO: Pod "downwardapi-volume-b5b60437-4b27-44e4-9c3d-193568486da3" satisfied condition "success or failure" Mar 9 00:17:36.651: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-b5b60437-4b27-44e4-9c3d-193568486da3 container client-container: STEP: delete the pod Mar 9 00:17:36.672: INFO: Waiting for pod downwardapi-volume-b5b60437-4b27-44e4-9c3d-193568486da3 to disappear Mar 9 00:17:36.676: INFO: Pod downwardapi-volume-b5b60437-4b27-44e4-9c3d-193568486da3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:17:36.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5379" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":280,"completed":168,"skipped":2628,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:17:36.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-dfd94c0e-b060-494a-8fb9-521c06338723 STEP: Creating a pod to test consume secrets Mar 9 00:17:36.780: INFO: Waiting up to 5m0s for pod "pod-secrets-02ac5a2e-754c-4ce6-8bee-3a30ce07a4a8" in namespace "secrets-7020" to be "success or failure" Mar 9 00:17:36.797: INFO: Pod "pod-secrets-02ac5a2e-754c-4ce6-8bee-3a30ce07a4a8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.402528ms Mar 9 00:17:38.801: INFO: Pod "pod-secrets-02ac5a2e-754c-4ce6-8bee-3a30ce07a4a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020558071s Mar 9 00:17:40.805: INFO: Pod "pod-secrets-02ac5a2e-754c-4ce6-8bee-3a30ce07a4a8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024480852s STEP: Saw pod success Mar 9 00:17:40.805: INFO: Pod "pod-secrets-02ac5a2e-754c-4ce6-8bee-3a30ce07a4a8" satisfied condition "success or failure" Mar 9 00:17:40.808: INFO: Trying to get logs from node latest-worker pod pod-secrets-02ac5a2e-754c-4ce6-8bee-3a30ce07a4a8 container secret-env-test: STEP: delete the pod Mar 9 00:17:40.828: INFO: Waiting for pod pod-secrets-02ac5a2e-754c-4ce6-8bee-3a30ce07a4a8 to disappear Mar 9 00:17:40.832: INFO: Pod pod-secrets-02ac5a2e-754c-4ce6-8bee-3a30ce07a4a8 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:17:40.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7020" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":280,"completed":169,"skipped":2635,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:17:40.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 9 00:17:41.464: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 9 00:17:44.533: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:17:44.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-368" for this suite. STEP: Destroying namespace "webhook-368-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":280,"completed":170,"skipped":2655,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:17:44.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 9 00:17:44.792: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b18c2f11-d762-4383-a1fb-81128aaec437" in namespace "projected-7241" to be "success or failure" Mar 9 00:17:44.815: INFO: Pod "downwardapi-volume-b18c2f11-d762-4383-a1fb-81128aaec437": Phase="Pending", Reason="", readiness=false. Elapsed: 22.260613ms Mar 9 00:17:46.819: INFO: Pod "downwardapi-volume-b18c2f11-d762-4383-a1fb-81128aaec437": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026402264s Mar 9 00:17:48.823: INFO: Pod "downwardapi-volume-b18c2f11-d762-4383-a1fb-81128aaec437": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030775731s STEP: Saw pod success Mar 9 00:17:48.823: INFO: Pod "downwardapi-volume-b18c2f11-d762-4383-a1fb-81128aaec437" satisfied condition "success or failure" Mar 9 00:17:48.827: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-b18c2f11-d762-4383-a1fb-81128aaec437 container client-container: STEP: delete the pod Mar 9 00:17:48.847: INFO: Waiting for pod downwardapi-volume-b18c2f11-d762-4383-a1fb-81128aaec437 to disappear Mar 9 00:17:48.850: INFO: Pod downwardapi-volume-b18c2f11-d762-4383-a1fb-81128aaec437 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:17:48.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7241" for this suite. 
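A minimal pod exercising the same DefaultMode behavior would look roughly like the sketch below (pod name and command are hypothetical, not the suite's generated ones): the projected downwardAPI volume writes the pod name to a file, and defaultMode fixes the permission bits on every file in the volume.

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: defaultmode-demo            # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]   # prints 400
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        defaultMode: 0400             # the mode the test asserts on files
        sources:
        - downwardAPI:
            items:
            - path: podname
              fieldRef:
                fieldPath: metadata.name
  EOF
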
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":171,"skipped":2686,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:17:48.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: validating api versions Mar 9 00:17:48.954: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config api-versions' Mar 9 00:17:49.202: INFO: stderr: "" Mar 9 00:17:49.202: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:17:49.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1927" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":280,"completed":172,"skipped":2708,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:17:49.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod busybox-1149a859-4358-4218-87ae-e3e049132c6f in namespace container-probe-8269 Mar 9 00:17:51.280: INFO: Started pod busybox-1149a859-4358-4218-87ae-e3e049132c6f in namespace container-probe-8269 STEP: checking the pod's current state and verifying that restartCount is present Mar 9 00:17:51.283: INFO: Initial restart count of pod busybox-1149a859-4358-4218-87ae-e3e049132c6f is 0 Mar 9 00:18:43.384: INFO: Restart count of pod container-probe-8269/busybox-1149a859-4358-4218-87ae-e3e049132c6f is now 1 (52.100920533s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:18:43.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8269" for this suite. 
• [SLOW TEST:54.222 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":280,"completed":173,"skipped":2747,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:18:43.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1384 STEP: creating the pod Mar 9 00:18:43.509: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1399' Mar 9 00:18:43.884: INFO: stderr: "" Mar 9 00:18:43.884: INFO: stdout: "pod/pause created\n" Mar 9 00:18:43.884: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 9 00:18:43.884: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1399" to be "running and ready" Mar 9 00:18:43.899: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 14.698198ms Mar 9 00:18:45.902: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018661094s Mar 9 00:18:47.917: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.0336045s Mar 9 00:18:47.917: INFO: Pod "pause" satisfied condition "running and ready" Mar 9 00:18:47.918: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: adding the label testing-label with value testing-label-value to a pod Mar 9 00:18:47.918: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1399' Mar 9 00:18:48.045: INFO: stderr: "" Mar 9 00:18:48.045: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 9 00:18:48.045: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1399' Mar 9 00:18:48.142: INFO: stderr: "" Mar 9 00:18:48.142: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 9 00:18:48.142: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1399' Mar 9 00:18:48.229: INFO: stderr: "" Mar 9 00:18:48.229: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 9 00:18:48.229: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1399' Mar 9 00:18:48.304: INFO: stderr: "" Mar 9 00:18:48.304: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391 STEP: using delete to clean up resources Mar 9 00:18:48.304: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1399' Mar 9 00:18:48.402: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 9 00:18:48.402: INFO: stdout: "pod \"pause\" force deleted\n" Mar 9 00:18:48.402: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1399' Mar 9 00:18:48.516: INFO: stderr: "No resources found in kubectl-1399 namespace.\n" Mar 9 00:18:48.516: INFO: stdout: "" Mar 9 00:18:48.516: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1399 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 9 00:18:48.588: INFO: stderr: "" Mar 9 00:18:48.589: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:18:48.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1399" for this suite. 
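Stripped of the framework plumbing, the label round-trip above reduces to three commands; a trailing dash is kubectl's syntax for removing a label.

  kubectl label pods pause testing-label=testing-label-value   # add the label
  kubectl get pod pause -L testing-label                       # show it as a column
  kubectl label pods pause testing-label-                      # remove it again
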
• [SLOW TEST:5.208 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1381 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":280,"completed":174,"skipped":2753,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:18:48.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 9 00:18:48.711: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Mar 9 00:18:48.735: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 00:18:48.778: INFO: Number of nodes with available pods: 0 Mar 9 00:18:48.778: INFO: Node latest-worker is running more than one daemon pod Mar 9 00:18:49.782: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 00:18:49.785: INFO: Number of nodes with available pods: 0 Mar 9 00:18:49.785: INFO: Node latest-worker is running more than one daemon pod Mar 9 00:18:50.783: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 00:18:50.786: INFO: Number of nodes with available pods: 0 Mar 9 00:18:50.786: INFO: Node latest-worker is running more than one daemon pod Mar 9 00:18:51.786: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 00:18:51.790: INFO: Number of nodes with available pods: 2 Mar 9 00:18:51.790: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 9 00:18:51.820: INFO: Wrong image for pod: daemon-set-pf6ck. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 9 00:18:51.820: INFO: Wrong image for pod: daemon-set-qqs2v. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 9 00:18:51.864: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 00:18:52.882: INFO: Wrong image for pod: daemon-set-pf6ck. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 9 00:18:52.882: INFO: Wrong image for pod: daemon-set-qqs2v. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 9 00:18:52.886: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 00:18:53.869: INFO: Wrong image for pod: daemon-set-pf6ck. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 9 00:18:53.869: INFO: Wrong image for pod: daemon-set-qqs2v. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 9 00:18:53.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 00:18:54.868: INFO: Wrong image for pod: daemon-set-pf6ck. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 9 00:18:54.868: INFO: Pod daemon-set-pf6ck is not available Mar 9 00:18:54.868: INFO: Wrong image for pod: daemon-set-qqs2v. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 9 00:18:54.872: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 00:18:55.868: INFO: Pod daemon-set-kgkrx is not available Mar 9 00:18:55.868: INFO: Wrong image for pod: daemon-set-qqs2v. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 9 00:18:55.872: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 00:18:56.869: INFO: Pod daemon-set-kgkrx is not available Mar 9 00:18:56.869: INFO: Wrong image for pod: daemon-set-qqs2v. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 9 00:18:56.873: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 00:18:57.869: INFO: Wrong image for pod: daemon-set-qqs2v. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 9 00:18:57.869: INFO: Pod daemon-set-qqs2v is not available Mar 9 00:18:57.873: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 00:18:58.870: INFO: Pod daemon-set-rfvkj is not available Mar 9 00:18:58.873: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Mar 9 00:18:58.876: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 00:18:58.879: INFO: Number of nodes with available pods: 1 Mar 9 00:18:58.879: INFO: Node latest-worker2 is running more than one daemon pod Mar 9 00:18:59.883: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 00:18:59.887: INFO: Number of nodes with available pods: 1 Mar 9 00:18:59.887: INFO: Node latest-worker2 is running more than one daemon pod Mar 9 00:19:00.884: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 00:19:00.887: INFO: Number of nodes with available pods: 2 Mar 9 00:19:00.887: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9355, will wait for the garbage collector to delete the pods Mar 9 00:19:00.959: INFO: Deleting DaemonSet.extensions daemon-set took: 4.927813ms Mar 9 00:19:01.259: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.217778ms Mar 9 00:19:04.862: INFO: Number of nodes with available pods: 0 Mar 9 00:19:04.862: INFO: Number of running nodes: 0, number of available pods: 0 Mar 9 00:19:04.865: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9355/daemonsets","resourceVersion":"143425"},"items":null} Mar 9 00:19:04.867: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9355/pods","resourceVersion":"143425"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:19:04.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9355" for this suite. 
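A DaemonSet equivalent to the one rolled above would look roughly like this (the label is hypothetical; the images are the two seen in the log). With updateStrategy RollingUpdate, changing the pod template's image deletes and recreates daemon pods node by node, which is exactly the wrong-image/not-available churn recorded above.

  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set
  spec:
    selector:
      matchLabels:
        app: daemon-set               # hypothetical label
    updateStrategy:
      type: RollingUpdate
    template:
      metadata:
        labels:
          app: daemon-set
      spec:
        containers:
        - name: app
          image: docker.io/library/httpd:2.4.38-alpine
  EOF

  # Update the image and watch the rolling replacement:
  kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/agnhost:2.8
  kubectl rollout status daemonset/daemon-set
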
• [SLOW TEST:16.269 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":280,"completed":175,"skipped":2815,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:19:04.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-map-b70fde45-b064-4b5e-b425-ba468a69586c STEP: Creating a pod to test consume configMaps Mar 9 00:19:04.978: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f2e6d523-0086-46cc-881e-c8f2c5f8ea9f" in namespace "projected-3122" to be "success or failure" Mar 9 00:19:04.993: INFO: Pod "pod-projected-configmaps-f2e6d523-0086-46cc-881e-c8f2c5f8ea9f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.075638ms Mar 9 00:19:06.997: INFO: Pod "pod-projected-configmaps-f2e6d523-0086-46cc-881e-c8f2c5f8ea9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019536921s Mar 9 00:19:09.001: INFO: Pod "pod-projected-configmaps-f2e6d523-0086-46cc-881e-c8f2c5f8ea9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02344937s STEP: Saw pod success Mar 9 00:19:09.001: INFO: Pod "pod-projected-configmaps-f2e6d523-0086-46cc-881e-c8f2c5f8ea9f" satisfied condition "success or failure" Mar 9 00:19:09.005: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-f2e6d523-0086-46cc-881e-c8f2c5f8ea9f container projected-configmap-volume-test: STEP: delete the pod Mar 9 00:19:09.025: INFO: Waiting for pod pod-projected-configmaps-f2e6d523-0086-46cc-881e-c8f2c5f8ea9f to disappear Mar 9 00:19:09.029: INFO: Pod pod-projected-configmaps-f2e6d523-0086-46cc-881e-c8f2c5f8ea9f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:19:09.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3122" for this suite. 
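"Mappings and Item mode" means each ConfigMap key is remapped to an explicit path inside the volume and given a per-item file mode. A sketch with hypothetical names:

  kubectl create configmap example-config --from-literal=data-1=value-1   # hypothetical

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-configmap-demo    # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: projected-configmap-volume-test
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "cat /etc/config/path/to/data-1; stat -c '%a' /etc/config/path/to/data-1"]
      volumeMounts:
      - name: config
        mountPath: /etc/config
    volumes:
    - name: config
      projected:
        sources:
        - configMap:
            name: example-config
            items:
            - key: data-1
              path: path/to/data-1    # the mapping
              mode: 0400              # the per-item mode
  EOF
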
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":176,"skipped":2850,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:19:09.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating the pod Mar 9 00:19:11.661: INFO: Successfully updated pod "labelsupdate48af6437-7794-47d4-af26-5557d5724cf0" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:19:13.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1082" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":280,"completed":177,"skipped":2877,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:19:13.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 9 00:19:13.775: INFO: Waiting up to 5m0s for pod "pod-6420a431-38d1-4d81-a651-c087d3b42a56" in namespace "emptydir-2256" to be "success or failure" Mar 9 00:19:13.784: INFO: Pod "pod-6420a431-38d1-4d81-a651-c087d3b42a56": Phase="Pending", Reason="", readiness=false. Elapsed: 8.814277ms Mar 9 00:19:15.788: INFO: Pod "pod-6420a431-38d1-4d81-a651-c087d3b42a56": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.013069854s STEP: Saw pod success Mar 9 00:19:15.788: INFO: Pod "pod-6420a431-38d1-4d81-a651-c087d3b42a56" satisfied condition "success or failure" Mar 9 00:19:15.792: INFO: Trying to get logs from node latest-worker pod pod-6420a431-38d1-4d81-a651-c087d3b42a56 container test-container: STEP: delete the pod Mar 9 00:19:15.824: INFO: Waiting for pod pod-6420a431-38d1-4d81-a651-c087d3b42a56 to disappear Mar 9 00:19:15.910: INFO: Pod pod-6420a431-38d1-4d81-a651-c087d3b42a56 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:19:15.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2256" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":178,"skipped":2905,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:19:15.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 9 00:19:15.975: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1fbb1199-af38-4d7d-80dd-5537831cdc32" in namespace "downward-api-7185" to be "success or failure" Mar 9 00:19:16.229: INFO: Pod "downwardapi-volume-1fbb1199-af38-4d7d-80dd-5537831cdc32": Phase="Pending", Reason="", readiness=false. Elapsed: 254.523242ms Mar 9 00:19:18.233: INFO: Pod "downwardapi-volume-1fbb1199-af38-4d7d-80dd-5537831cdc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.258652431s STEP: Saw pod success Mar 9 00:19:18.233: INFO: Pod "downwardapi-volume-1fbb1199-af38-4d7d-80dd-5537831cdc32" satisfied condition "success or failure" Mar 9 00:19:18.237: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-1fbb1199-af38-4d7d-80dd-5537831cdc32 container client-container: STEP: delete the pod Mar 9 00:19:18.302: INFO: Waiting for pod downwardapi-volume-1fbb1199-af38-4d7d-80dd-5537831cdc32 to disappear Mar 9 00:19:18.313: INFO: Pod downwardapi-volume-1fbb1199-af38-4d7d-80dd-5537831cdc32 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:19:18.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7185" for this suite. 
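The "memory request" variant publishes resources.requests.memory through a downwardAPI volume; the divisor controls the unit the value is rendered in. A sketch with hypothetical names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-mem-demo           # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "cat /etc/podinfo/mem_request"]   # prints 32
      resources:
        requests:
          memory: 32Mi
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: mem_request
          resourceFieldRef:
            containerName: client-container
            resource: requests.memory
            divisor: 1Mi
  EOF
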
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":280,"completed":179,"skipped":2908,"failed":0} ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:19:18.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:19:24.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1166" for this suite. STEP: Destroying namespace "nsdeletetest-5349" for this suite. Mar 9 00:19:24.652: INFO: Namespace nsdeletetest-5349 was already deleted STEP: Destroying namespace "nsdeletetest-1476" for this suite. 
• [SLOW TEST:6.336 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":280,"completed":180,"skipped":2908,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:19:24.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service externalname-service with the type=ExternalName in namespace services-9897 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-9897 I0309 00:19:24.768713 7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-9897, replica count: 2 I0309 00:19:27.819097 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 9 00:19:27.819: INFO: Creating new exec pod Mar 9 00:19:30.836: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-9897 execpod8jzgz -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 9 00:19:31.056: INFO: stderr: "I0309 00:19:30.986933 2095 log.go:172] (0xc000a58630) (0xc0006bde00) Create stream\nI0309 00:19:30.986990 2095 log.go:172] (0xc000a58630) (0xc0006bde00) Stream added, broadcasting: 1\nI0309 00:19:30.989588 2095 log.go:172] (0xc000a58630) Reply frame received for 1\nI0309 00:19:30.989629 2095 log.go:172] (0xc000a58630) (0xc0006bdea0) Create stream\nI0309 00:19:30.989649 2095 log.go:172] (0xc000a58630) (0xc0006bdea0) Stream added, broadcasting: 3\nI0309 00:19:30.990577 2095 log.go:172] (0xc000a58630) Reply frame received for 3\nI0309 00:19:30.990606 2095 log.go:172] (0xc000a58630) (0xc0006526e0) Create stream\nI0309 00:19:30.990614 2095 log.go:172] (0xc000a58630) (0xc0006526e0) Stream added, broadcasting: 5\nI0309 00:19:30.991401 2095 log.go:172] (0xc000a58630) Reply frame received for 5\nI0309 00:19:31.049322 2095 log.go:172] (0xc000a58630) Data frame received for 5\nI0309 00:19:31.049352 2095 log.go:172] (0xc0006526e0) (5) Data frame handling\nI0309 00:19:31.049367 2095 log.go:172] (0xc0006526e0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0309 00:19:31.050570 2095 log.go:172] (0xc000a58630) Data frame received for 5\nI0309 00:19:31.050595 
2095 log.go:172] (0xc0006526e0) (5) Data frame handling\nI0309 00:19:31.050604 2095 log.go:172] (0xc0006526e0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0309 00:19:31.050726 2095 log.go:172] (0xc000a58630) Data frame received for 3\nI0309 00:19:31.050737 2095 log.go:172] (0xc0006bdea0) (3) Data frame handling\nI0309 00:19:31.050881 2095 log.go:172] (0xc000a58630) Data frame received for 5\nI0309 00:19:31.050899 2095 log.go:172] (0xc0006526e0) (5) Data frame handling\nI0309 00:19:31.052444 2095 log.go:172] (0xc000a58630) Data frame received for 1\nI0309 00:19:31.052472 2095 log.go:172] (0xc0006bde00) (1) Data frame handling\nI0309 00:19:31.052488 2095 log.go:172] (0xc0006bde00) (1) Data frame sent\nI0309 00:19:31.052515 2095 log.go:172] (0xc000a58630) (0xc0006bde00) Stream removed, broadcasting: 1\nI0309 00:19:31.052544 2095 log.go:172] (0xc000a58630) Go away received\nI0309 00:19:31.052869 2095 log.go:172] (0xc000a58630) (0xc0006bde00) Stream removed, broadcasting: 1\nI0309 00:19:31.052893 2095 log.go:172] (0xc000a58630) (0xc0006bdea0) Stream removed, broadcasting: 3\nI0309 00:19:31.052908 2095 log.go:172] (0xc000a58630) (0xc0006526e0) Stream removed, broadcasting: 5\n" Mar 9 00:19:31.056: INFO: stdout: "" Mar 9 00:19:31.056: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-9897 execpod8jzgz -- /bin/sh -x -c nc -zv -t -w 2 10.96.115.53 80' Mar 9 00:19:31.247: INFO: stderr: "I0309 00:19:31.185459 2115 log.go:172] (0xc00054aa50) (0xc0009fa140) Create stream\nI0309 00:19:31.185506 2115 log.go:172] (0xc00054aa50) (0xc0009fa140) Stream added, broadcasting: 1\nI0309 00:19:31.188269 2115 log.go:172] (0xc00054aa50) Reply frame received for 1\nI0309 00:19:31.188309 2115 log.go:172] (0xc00054aa50) (0xc0003e9400) Create stream\nI0309 00:19:31.188319 2115 log.go:172] (0xc00054aa50) (0xc0003e9400) Stream added, broadcasting: 3\nI0309 00:19:31.189251 2115 log.go:172] (0xc00054aa50) Reply frame received for 3\nI0309 00:19:31.189284 2115 log.go:172] (0xc00054aa50) (0xc0009fa1e0) Create stream\nI0309 00:19:31.189309 2115 log.go:172] (0xc00054aa50) (0xc0009fa1e0) Stream added, broadcasting: 5\nI0309 00:19:31.190356 2115 log.go:172] (0xc00054aa50) Reply frame received for 5\nI0309 00:19:31.243801 2115 log.go:172] (0xc00054aa50) Data frame received for 3\nI0309 00:19:31.243830 2115 log.go:172] (0xc0003e9400) (3) Data frame handling\nI0309 00:19:31.243853 2115 log.go:172] (0xc00054aa50) Data frame received for 5\nI0309 00:19:31.243863 2115 log.go:172] (0xc0009fa1e0) (5) Data frame handling\nI0309 00:19:31.243874 2115 log.go:172] (0xc0009fa1e0) (5) Data frame sent\nI0309 00:19:31.243883 2115 log.go:172] (0xc00054aa50) Data frame received for 5\nI0309 00:19:31.243891 2115 log.go:172] (0xc0009fa1e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.115.53 80\nConnection to 10.96.115.53 80 port [tcp/http] succeeded!\nI0309 00:19:31.244730 2115 log.go:172] (0xc00054aa50) Data frame received for 1\nI0309 00:19:31.244751 2115 log.go:172] (0xc0009fa140) (1) Data frame handling\nI0309 00:19:31.244760 2115 log.go:172] (0xc0009fa140) (1) Data frame sent\nI0309 00:19:31.244768 2115 log.go:172] (0xc00054aa50) (0xc0009fa140) Stream removed, broadcasting: 1\nI0309 00:19:31.244780 2115 log.go:172] (0xc00054aa50) Go away received\nI0309 00:19:31.245075 2115 log.go:172] (0xc00054aa50) (0xc0009fa140) Stream removed, broadcasting: 1\nI0309 00:19:31.245089 2115 log.go:172] (0xc00054aa50) (0xc0003e9400) 
Stream removed, broadcasting: 3\nI0309 00:19:31.245095 2115 log.go:172] (0xc00054aa50) (0xc0009fa1e0) Stream removed, broadcasting: 5\n" Mar 9 00:19:31.247: INFO: stdout: "" Mar 9 00:19:31.247: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:19:31.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9897" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:6.625 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":280,"completed":181,"skipped":2915,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:19:31.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 9 00:19:31.391: INFO: The status of Pod test-webserver-d3b3fd35-be8e-4ed0-abb0-55fc913c4928 is Pending, waiting for it to be Running (with Ready = true) Mar 9 00:19:33.394: INFO: The status of Pod test-webserver-d3b3fd35-be8e-4ed0-abb0-55fc913c4928 is Running (Ready = false) Mar 9 00:19:35.395: INFO: The status of Pod test-webserver-d3b3fd35-be8e-4ed0-abb0-55fc913c4928 is Running (Ready = false) Mar 9 00:19:37.395: INFO: The status of Pod test-webserver-d3b3fd35-be8e-4ed0-abb0-55fc913c4928 is Running (Ready = false) Mar 9 00:19:39.395: INFO: The status of Pod test-webserver-d3b3fd35-be8e-4ed0-abb0-55fc913c4928 is Running (Ready = false) Mar 9 00:19:41.395: INFO: The status of Pod test-webserver-d3b3fd35-be8e-4ed0-abb0-55fc913c4928 is Running (Ready = false) Mar 9 00:19:43.397: INFO: The status of Pod test-webserver-d3b3fd35-be8e-4ed0-abb0-55fc913c4928 is Running (Ready = false) Mar 9 00:19:45.395: INFO: The status of Pod test-webserver-d3b3fd35-be8e-4ed0-abb0-55fc913c4928 is Running (Ready = false) Mar 9 00:19:47.395: INFO: The status of Pod test-webserver-d3b3fd35-be8e-4ed0-abb0-55fc913c4928 is Running (Ready = false) Mar 9 00:19:49.395: INFO: The status of Pod test-webserver-d3b3fd35-be8e-4ed0-abb0-55fc913c4928 is Running (Ready = false) Mar 9 00:19:51.415: INFO: The status of Pod 
test-webserver-d3b3fd35-be8e-4ed0-abb0-55fc913c4928 is Running (Ready = false) Mar 9 00:19:53.397: INFO: The status of Pod test-webserver-d3b3fd35-be8e-4ed0-abb0-55fc913c4928 is Running (Ready = false) Mar 9 00:19:55.395: INFO: The status of Pod test-webserver-d3b3fd35-be8e-4ed0-abb0-55fc913c4928 is Running (Ready = true) Mar 9 00:19:55.398: INFO: Container started at 2020-03-09 00:19:32 +0000 UTC, pod became ready at 2020-03-09 00:19:54 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:19:55.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8727" for this suite. • [SLOW TEST:24.126 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":280,"completed":182,"skipped":2978,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:19:55.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 9 00:19:55.948: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 9 00:19:59.002: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 9 00:19:59.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6792-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:20:00.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9080" for this suite. STEP: Destroying namespace "webhook-9080-markers" for this suite. 
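Registering a mutating webhook for a custom resource uses the same admissionregistration API as for built-in kinds, with the CRD's group and plural in the rule. The sketch below is illustrative only: the configuration name, clientConfig path, and CA bundle are placeholders, not the suite's actual values; the service name and namespace come from the log above.

  kubectl apply -f - <<'EOF'
  apiVersion: admissionregistration.k8s.io/v1
  kind: MutatingWebhookConfiguration
  metadata:
    name: mutate-custom-resource      # hypothetical name
  webhooks:
  - name: mutate-crd.webhook.example.com
    admissionReviewVersions: ["v1", "v1beta1"]
    sideEffects: None
    rules:
    - apiGroups: ["webhook.example.com"]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["e2e-test-webhook-6792-crds"]
    clientConfig:
      service:
        namespace: webhook-9080       # service the log paired with the endpoint
        name: e2e-test-webhook
        path: /mutate-crd             # placeholder path
      caBundle: Cg==                  # placeholder; supply the real CA bundle
  EOF
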
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":280,"completed":183,"skipped":2994,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:20:00.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-ad10cd56-1b73-4109-9953-497eeebd8c47 STEP: Creating a pod to test consume configMaps Mar 9 00:20:00.294: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-491f508e-0eed-4e72-b0ed-c06118c7ac82" in namespace "projected-4410" to be "success or failure" Mar 9 00:20:00.308: INFO: Pod "pod-projected-configmaps-491f508e-0eed-4e72-b0ed-c06118c7ac82": Phase="Pending", Reason="", readiness=false. Elapsed: 14.183054ms Mar 9 00:20:02.312: INFO: Pod "pod-projected-configmaps-491f508e-0eed-4e72-b0ed-c06118c7ac82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01807365s STEP: Saw pod success Mar 9 00:20:02.312: INFO: Pod "pod-projected-configmaps-491f508e-0eed-4e72-b0ed-c06118c7ac82" satisfied condition "success or failure" Mar 9 00:20:02.314: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-491f508e-0eed-4e72-b0ed-c06118c7ac82 container projected-configmap-volume-test: STEP: delete the pod Mar 9 00:20:02.334: INFO: Waiting for pod pod-projected-configmaps-491f508e-0eed-4e72-b0ed-c06118c7ac82 to disappear Mar 9 00:20:02.366: INFO: Pod pod-projected-configmaps-491f508e-0eed-4e72-b0ed-c06118c7ac82 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:20:02.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4410" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":184,"skipped":3019,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:20:02.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name projected-secret-test-8b047aa4-53c3-402d-8d18-b0cd5df8d21c STEP: Creating a pod to test consume secrets Mar 9 00:20:02.444: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5fe6c0d0-81be-43a6-b9d2-00876c4e4e30" in namespace "projected-1560" to be "success or failure" Mar 9 00:20:02.450: INFO: Pod "pod-projected-secrets-5fe6c0d0-81be-43a6-b9d2-00876c4e4e30": Phase="Pending", Reason="", readiness=false. Elapsed: 6.227436ms Mar 9 00:20:04.454: INFO: Pod "pod-projected-secrets-5fe6c0d0-81be-43a6-b9d2-00876c4e4e30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010069562s STEP: Saw pod success Mar 9 00:20:04.454: INFO: Pod "pod-projected-secrets-5fe6c0d0-81be-43a6-b9d2-00876c4e4e30" satisfied condition "success or failure" Mar 9 00:20:04.456: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-5fe6c0d0-81be-43a6-b9d2-00876c4e4e30 container projected-secret-volume-test: STEP: delete the pod Mar 9 00:20:04.486: INFO: Waiting for pod pod-projected-secrets-5fe6c0d0-81be-43a6-b9d2-00876c4e4e30 to disappear Mar 9 00:20:04.499: INFO: Pod pod-projected-secrets-5fe6c0d0-81be-43a6-b9d2-00876c4e4e30 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:20:04.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1560" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":185,"skipped":3054,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:20:04.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test hostPath mode Mar 9 00:20:04.554: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2049" to be "success or failure" Mar 9 00:20:04.558: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133256ms Mar 9 00:20:06.562: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2.008262533s Mar 9 00:20:08.566: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012397338s STEP: Saw pod success Mar 9 00:20:08.566: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Mar 9 00:20:08.569: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 9 00:20:08.595: INFO: Waiting for pod pod-host-path-test to disappear Mar 9 00:20:08.600: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:20:08.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-2049" for this suite. 
•{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":186,"skipped":3072,"failed":0} ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:20:08.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 9 00:20:08.714: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 9 00:20:13.717: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:20:13.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-346" for this suite. • [SLOW TEST:5.234 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":280,"completed":187,"skipped":3072,"failed":0} SSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:20:13.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 9 00:20:16.030: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:20:16.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7664" for 
this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":188,"skipped":3076,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:20:16.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 9 00:20:16.122: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:20:32.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2232" for this suite. • [SLOW TEST:16.048 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":280,"completed":189,"skipped":3101,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:20:32.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-84bdc012-95d5-46c9-a3cb-4c04a0183903 STEP: Creating a pod to test consume secrets Mar 9 00:20:32.225: INFO: Waiting up to 5m0s for pod "pod-secrets-08bc4dd4-6355-4dcc-afdb-9e35371608df" in namespace "secrets-6205" to be "success or failure" Mar 9 00:20:32.230: INFO: Pod "pod-secrets-08bc4dd4-6355-4dcc-afdb-9e35371608df": Phase="Pending", Reason="", 
readiness=false. Elapsed: 4.311517ms Mar 9 00:20:34.296: INFO: Pod "pod-secrets-08bc4dd4-6355-4dcc-afdb-9e35371608df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070210943s Mar 9 00:20:36.299: INFO: Pod "pod-secrets-08bc4dd4-6355-4dcc-afdb-9e35371608df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073758487s STEP: Saw pod success Mar 9 00:20:36.299: INFO: Pod "pod-secrets-08bc4dd4-6355-4dcc-afdb-9e35371608df" satisfied condition "success or failure" Mar 9 00:20:36.302: INFO: Trying to get logs from node latest-worker pod pod-secrets-08bc4dd4-6355-4dcc-afdb-9e35371608df container secret-volume-test: STEP: delete the pod Mar 9 00:20:36.345: INFO: Waiting for pod pod-secrets-08bc4dd4-6355-4dcc-afdb-9e35371608df to disappear Mar 9 00:20:36.351: INFO: Pod pod-secrets-08bc4dd4-6355-4dcc-afdb-9e35371608df no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:20:36.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6205" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":190,"skipped":3122,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:20:36.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 9 00:20:36.944: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 9 00:20:39.992: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:20:52.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6617" for this suite. STEP: Destroying namespace "webhook-6617-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:15.926 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":280,"completed":191,"skipped":3131,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:20:52.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 9 00:20:52.794: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 9 00:20:55.830: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:20:56.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8465" for this suite. STEP: Destroying namespace "webhook-8465-markers" for this suite. 
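------------------------------
[editor's note] The timeout behaviour exercised in the "should honor timeout" test above hangs on two fields of the webhook registration: timeoutSeconds (defaulted to 10s in admissionregistration.k8s.io/v1, as the log itself notes) and failurePolicy. A sketch of a registration that would time out and be ignored; the configuration, service, and webhook names are hypothetical, and nothing here reproduces the e2e harness itself:

cat <<'EOF' | kubectl apply -f -
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: slow-webhook-demo            # hypothetical
webhooks:
- name: slow.demo.example.com        # hypothetical
  timeoutSeconds: 1                  # shorter than the backend's latency
  failurePolicy: Ignore              # Fail would reject the request when the call times out
  clientConfig:
    service:
      namespace: demo                # hypothetical service; caBundle omitted for brevity
      name: slow-webhook
      path: /validate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  sideEffects: None
  admissionReviewVersions: ["v1"]
EOF
kubectl delete validatingwebhookconfiguration slow-webhook-demo   # clean up
------------------------------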
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":280,"completed":192,"skipped":3137,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:20:56.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88 Mar 9 00:20:56.317: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 9 00:20:56.324: INFO: Waiting for terminating namespaces to be deleted... Mar 9 00:20:56.326: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 9 00:20:56.329: INFO: sample-webhook-deployment-5f65f8c764-vxb25 from webhook-8465 started at 2020-03-09 00:20:52 +0000 UTC (1 container statuses recorded) Mar 9 00:20:56.329: INFO: Container sample-webhook ready: true, restart count 0 Mar 9 00:20:56.329: INFO: kube-proxy-9jc24 from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 9 00:20:56.329: INFO: Container kube-proxy ready: true, restart count 0 Mar 9 00:20:56.329: INFO: kindnet-2j5xm from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 9 00:20:56.329: INFO: Container kindnet-cni ready: true, restart count 0 Mar 9 00:20:56.329: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 9 00:20:56.333: INFO: kube-proxy-cx5xz from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 9 00:20:56.333: INFO: Container kube-proxy ready: true, restart count 0 Mar 9 00:20:56.333: INFO: kindnet-spz5f from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 9 00:20:56.333: INFO: Container kindnet-cni ready: true, restart count 0 Mar 9 00:20:56.333: INFO: coredns-6955765f44-cgshp from kube-system started at 2020-03-08 14:50:16 +0000 UTC (1 container statuses recorded) Mar 9 00:20:56.333: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Mar 9 00:20:56.376: INFO: Pod coredns-6955765f44-cgshp requesting resource cpu=100m on Node latest-worker2 Mar 9 00:20:56.376: INFO: Pod kindnet-2j5xm requesting resource cpu=100m on Node latest-worker Mar 9 00:20:56.376: INFO: Pod kindnet-spz5f requesting resource cpu=100m on Node latest-worker2 Mar 9 00:20:56.376: INFO: Pod kube-proxy-9jc24 requesting resource cpu=0m on Node latest-worker Mar 9 00:20:56.376: INFO: Pod kube-proxy-cx5xz requesting resource cpu=0m 
on Node latest-worker2 Mar 9 00:20:56.377: INFO: Pod sample-webhook-deployment-5f65f8c764-vxb25 requesting resource cpu=0m on Node latest-worker STEP: Starting Pods to consume most of the cluster CPU. Mar 9 00:20:56.377: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Mar 9 00:20:56.381: INFO: Creating a pod which consumes cpu=11060m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-04f5e885-de03-4202-963b-cf65423986b8.15fa7ad0db7ee032], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5310/filler-pod-04f5e885-de03-4202-963b-cf65423986b8 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-04f5e885-de03-4202-963b-cf65423986b8.15fa7ad10d9781f2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-04f5e885-de03-4202-963b-cf65423986b8.15fa7ad11f14c1c2], Reason = [Created], Message = [Created container filler-pod-04f5e885-de03-4202-963b-cf65423986b8] STEP: Considering event: Type = [Normal], Name = [filler-pod-04f5e885-de03-4202-963b-cf65423986b8.15fa7ad12b25aa4c], Reason = [Started], Message = [Started container filler-pod-04f5e885-de03-4202-963b-cf65423986b8] STEP: Considering event: Type = [Normal], Name = [filler-pod-8fb85800-e25d-404a-83a9-b0dd010ea29c.15fa7ad0d906b0bc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5310/filler-pod-8fb85800-e25d-404a-83a9-b0dd010ea29c to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-8fb85800-e25d-404a-83a9-b0dd010ea29c.15fa7ad10669181b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-8fb85800-e25d-404a-83a9-b0dd010ea29c.15fa7ad1178927da], Reason = [Created], Message = [Created container filler-pod-8fb85800-e25d-404a-83a9-b0dd010ea29c] STEP: Considering event: Type = [Normal], Name = [filler-pod-8fb85800-e25d-404a-83a9-b0dd010ea29c.15fa7ad125c73273], Reason = [Started], Message = [Started container filler-pod-8fb85800-e25d-404a-83a9-b0dd010ea29c] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fa7ad153ca6f2d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:20:59.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5310" for this suite. 
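------------------------------
[editor's note] The Warning event above is the assertion target: once the filler pods have claimed nearly all allocatable CPU, one more pod with a large request is unschedulable. The same FailedScheduling event is easy to provoke by hand; the pod name is illustrative and the 100-CPU request is deliberately unsatisfiable:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cpu-hog-demo                 # illustrative
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "100"                   # more than any ordinary node can offer
EOF
# expect an event like the one in the log: 0/N nodes are available ... Insufficient cpu
kubectl get events --field-selector involvedObject.name=cpu-hog-demo,reason=FailedScheduling
kubectl delete pod cpu-hog-demo
------------------------------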
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":280,"completed":193,"skipped":3142,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:20:59.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 9 00:21:02.705: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:21:03.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8562" for this suite. •{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":280,"completed":194,"skipped":3152,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:21:03.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name s-test-opt-del-07be54f9-b8a4-448e-b46f-8ae1de3900c2 STEP: Creating secret with name s-test-opt-upd-2bcb0bce-8d34-48d3-9138-a6e1fcfecf31 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-07be54f9-b8a4-448e-b46f-8ae1de3900c2 STEP: Updating secret s-test-opt-upd-2bcb0bce-8d34-48d3-9138-a6e1fcfecf31 STEP: Creating secret with name s-test-opt-create-a3c7a3f4-f646-421e-a332-b5faaaa5939e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:21:11.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1058" for this suite. 
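------------------------------
[editor's note] The "optional updates" Secrets test above leans on the optional flag of the secret volume source: a pod may mount a secret that does not exist yet, and the kubelet populates or prunes the directory as secrets are created, updated, or deleted. A minimal sketch with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo
spec:
  containers:
  - name: app
    image: busybox:1.29
    command: ["sh", "-c", "while true; do ls /etc/opt-secret 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: opt
      mountPath: /etc/opt-secret
  volumes:
  - name: opt
    secret:
      secretName: may-not-exist-yet   # illustrative
      optional: true                  # pod starts even while the secret is absent
EOF
# creating the secret later should eventually show up in the mounted directory,
# which is the kind of update the test waits to observe:
kubectl create secret generic may-not-exist-yet --from-literal=k=v
------------------------------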
• [SLOW TEST:8.248 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":195,"skipped":3164,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:21:11.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 9 00:21:18.167: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 9 00:21:18.206: INFO: Pod pod-with-poststart-exec-hook still exists Mar 9 00:21:20.206: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 9 00:21:20.210: INFO: Pod pod-with-poststart-exec-hook still exists Mar 9 00:21:22.206: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 9 00:21:22.210: INFO: Pod pod-with-poststart-exec-hook still exists Mar 9 00:21:24.206: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 9 00:21:24.210: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:21:24.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7834" for this suite. 
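------------------------------
[editor's note] A postStart exec hook of the kind checked above is declared under the container's lifecycle field; the kubelet runs it right after the container starts, and the container is not reported Running until the hook returns. A sketch with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: poststart-demo
spec:
  containers:
  - name: app
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo started > /tmp/poststart-marker"]
EOF
kubectl exec poststart-demo -- cat /tmp/poststart-marker   # prints "started" once the hook has run
------------------------------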
• [SLOW TEST:12.219 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":280,"completed":196,"skipped":3172,"failed":0} SSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:21:24.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 9 00:21:28.836: INFO: Successfully updated pod "pod-update-21902ad2-9b49-4fd4-90d1-b64e2f41563e" STEP: verifying the updated pod is in kubernetes Mar 9 00:21:28.858: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:21:28.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2160" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":280,"completed":197,"skipped":3176,"failed":0} S ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:21:28.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating pod Mar 9 00:21:32.961: INFO: Pod pod-hostip-eb054e49-f361-4202-841d-ec5d88914287 has hostIP: 172.17.0.18 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:21:32.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8706" for this suite. 
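------------------------------
[editor's note] The host-IP check above simply reads status.hostIP, which the kubelet fills in once the pod is bound to a node. The same fields are easy to query by hand (the pod name is a placeholder):

kubectl get pod <pod-name> -o jsonpath='{.status.hostIP}'   # node address, e.g. 172.17.0.18 above
kubectl get pod <pod-name> -o jsonpath='{.status.podIP}'    # pod address, for comparison
------------------------------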
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":280,"completed":198,"skipped":3177,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:21:32.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-06c95ece-c7a3-4fdd-8cb5-250b8eb67ff8 STEP: Creating a pod to test consume configMaps Mar 9 00:21:33.121: INFO: Waiting up to 5m0s for pod "pod-configmaps-c62fbf88-ea82-4cba-b808-46c2a7431add" in namespace "configmap-676" to be "success or failure" Mar 9 00:21:33.153: INFO: Pod "pod-configmaps-c62fbf88-ea82-4cba-b808-46c2a7431add": Phase="Pending", Reason="", readiness=false. Elapsed: 32.340027ms Mar 9 00:21:35.174: INFO: Pod "pod-configmaps-c62fbf88-ea82-4cba-b808-46c2a7431add": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.052519553s STEP: Saw pod success Mar 9 00:21:35.174: INFO: Pod "pod-configmaps-c62fbf88-ea82-4cba-b808-46c2a7431add" satisfied condition "success or failure" Mar 9 00:21:35.177: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-c62fbf88-ea82-4cba-b808-46c2a7431add container configmap-volume-test: STEP: delete the pod Mar 9 00:21:35.192: INFO: Waiting for pod pod-configmaps-c62fbf88-ea82-4cba-b808-46c2a7431add to disappear Mar 9 00:21:35.196: INFO: Pod pod-configmaps-c62fbf88-ea82-4cba-b808-46c2a7431add no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:21:35.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-676" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":199,"skipped":3179,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:21:35.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:21:41.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8572" for this suite. • [SLOW TEST:6.055 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":280,"completed":200,"skipped":3201,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:21:41.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-projected-all-test-volume-2191b483-1511-4320-bd92-5bf609c643f6 STEP: Creating secret with name secret-projected-all-test-volume-d94992eb-8138-4a83-8929-f8f735a1534e STEP: Creating a pod to test Check all projections for projected volume plugin Mar 9 00:21:41.312: INFO: Waiting up to 5m0s for pod "projected-volume-e46f367c-20a9-473d-a11f-ede3ddfe4961" in namespace "projected-273" to be "success or failure" Mar 9 00:21:41.327: INFO: Pod "projected-volume-e46f367c-20a9-473d-a11f-ede3ddfe4961": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.332041ms Mar 9 00:21:43.330: INFO: Pod "projected-volume-e46f367c-20a9-473d-a11f-ede3ddfe4961": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018241069s STEP: Saw pod success Mar 9 00:21:43.330: INFO: Pod "projected-volume-e46f367c-20a9-473d-a11f-ede3ddfe4961" satisfied condition "success or failure" Mar 9 00:21:43.332: INFO: Trying to get logs from node latest-worker2 pod projected-volume-e46f367c-20a9-473d-a11f-ede3ddfe4961 container projected-all-volume-test: STEP: delete the pod Mar 9 00:21:43.348: INFO: Waiting for pod projected-volume-e46f367c-20a9-473d-a11f-ede3ddfe4961 to disappear Mar 9 00:21:43.352: INFO: Pod projected-volume-e46f367c-20a9-473d-a11f-ede3ddfe4961 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:21:43.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-273" for this suite. •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":280,"completed":201,"skipped":3226,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:21:43.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 9 00:21:43.437: INFO: (0) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/ pods/ (200; 30.190652ms)
Mar 9 00:21:43.440: INFO: (1) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.244888ms)
Mar 9 00:21:43.443: INFO: (2) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.044166ms)
Mar 9 00:21:43.446: INFO: (3) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.936395ms)
Mar 9 00:21:43.449: INFO: (4) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.746947ms)
Mar 9 00:21:43.452: INFO: (5) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.09608ms)
Mar 9 00:21:43.455: INFO: (6) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.852872ms)
Mar 9 00:21:43.458: INFO: (7) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.710411ms)
Mar 9 00:21:43.461: INFO: (8) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.851061ms)
Mar 9 00:21:43.463: INFO: (9) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.720736ms)
Mar 9 00:21:43.466: INFO: (10) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.611265ms)
Mar 9 00:21:43.469: INFO: (11) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.409855ms)
Mar 9 00:21:43.471: INFO: (12) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.910801ms)
Mar 9 00:21:43.474: INFO: (13) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.717796ms)
Mar 9 00:21:43.478: INFO: (14) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.525228ms)
Mar 9 00:21:43.486: INFO: (15) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 7.87141ms)
Mar 9 00:21:43.490: INFO: (16) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 4.564325ms)
Mar 9 00:21:43.494: INFO: (17) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.444387ms)
Mar 9 00:21:43.509: INFO: (18) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 15.094233ms)
Mar 9 00:21:43.512: INFO: (19) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/
(200; 3.003455ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:21:43.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4376" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":280,"completed":202,"skipped":3279,"failed":0} SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:21:43.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Mar 9 00:21:43.589: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:21:47.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1723" for this suite. •{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":280,"completed":203,"skipped":3281,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:21:47.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating the pod Mar 9 00:21:50.062: INFO: Successfully updated pod "annotationupdatea2550f57-e79a-4e38-8296-eef68883ea96" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:21:52.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-538" for this suite. 
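------------------------------
[editor's note] The annotation-update test above works because downward-API volumes (plain or projected, as here) are refreshed by the kubelet when pod metadata changes, unlike downward-API environment variables, which are fixed at container start. A sketch using a plain downwardAPI volume, which behaves the same way for this purpose; names are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo
  annotations:
    build: "one"
spec:
  containers:
  - name: app
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
kubectl annotate pod annotation-demo build=two --overwrite
kubectl logs annotation-demo --tail=5   # the new value appears after the next kubelet sync
------------------------------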
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":280,"completed":204,"skipped":3297,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:21:52.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 9 00:21:52.135: INFO: Waiting up to 5m0s for pod "pod-c09a182b-27bc-420a-98fc-84d8864cebcb" in namespace "emptydir-2780" to be "success or failure" Mar 9 00:21:52.138: INFO: Pod "pod-c09a182b-27bc-420a-98fc-84d8864cebcb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.730092ms Mar 9 00:21:54.142: INFO: Pod "pod-c09a182b-27bc-420a-98fc-84d8864cebcb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006894694s Mar 9 00:21:56.146: INFO: Pod "pod-c09a182b-27bc-420a-98fc-84d8864cebcb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010540558s STEP: Saw pod success Mar 9 00:21:56.146: INFO: Pod "pod-c09a182b-27bc-420a-98fc-84d8864cebcb" satisfied condition "success or failure" Mar 9 00:21:56.149: INFO: Trying to get logs from node latest-worker2 pod pod-c09a182b-27bc-420a-98fc-84d8864cebcb container test-container: STEP: delete the pod Mar 9 00:21:56.170: INFO: Waiting for pod pod-c09a182b-27bc-420a-98fc-84d8864cebcb to disappear Mar 9 00:21:56.174: INFO: Pod pod-c09a182b-27bc-420a-98fc-84d8864cebcb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:21:56.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2780" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":205,"skipped":3315,"failed":0} SS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:21:56.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 9 00:21:56.278: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-61263c18-8b78-42dd-af03-832d6180c21a" in namespace "security-context-test-739" to be "success or failure" Mar 9 00:21:56.295: INFO: Pod "busybox-privileged-false-61263c18-8b78-42dd-af03-832d6180c21a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.838027ms Mar 9 00:21:58.472: INFO: Pod "busybox-privileged-false-61263c18-8b78-42dd-af03-832d6180c21a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.194220625s Mar 9 00:21:58.472: INFO: Pod "busybox-privileged-false-61263c18-8b78-42dd-af03-832d6180c21a" satisfied condition "success or failure" Mar 9 00:21:58.478: INFO: Got logs for pod "busybox-privileged-false-61263c18-8b78-42dd-af03-832d6180c21a": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:21:58.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-739" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":206,"skipped":3317,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:21:58.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 9 00:22:00.631: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:22:00.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2300" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":207,"skipped":3345,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:22:00.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 9 00:22:00.768: INFO: Waiting up to 5m0s for pod "pod-435ed1ae-9e8b-46d7-ba1a-ebd09f57846d" in namespace "emptydir-2847" to be "success or failure" Mar 9 00:22:00.772: INFO: Pod "pod-435ed1ae-9e8b-46d7-ba1a-ebd09f57846d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208227ms Mar 9 00:22:02.774: INFO: Pod "pod-435ed1ae-9e8b-46d7-ba1a-ebd09f57846d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006739167s STEP: Saw pod success Mar 9 00:22:02.775: INFO: Pod "pod-435ed1ae-9e8b-46d7-ba1a-ebd09f57846d" satisfied condition "success or failure" Mar 9 00:22:02.776: INFO: Trying to get logs from node latest-worker pod pod-435ed1ae-9e8b-46d7-ba1a-ebd09f57846d container test-container: STEP: delete the pod Mar 9 00:22:02.802: INFO: Waiting for pod pod-435ed1ae-9e8b-46d7-ba1a-ebd09f57846d to disappear Mar 9 00:22:02.809: INFO: Pod pod-435ed1ae-9e8b-46d7-ba1a-ebd09f57846d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:22:02.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2847" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":208,"skipped":3352,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:22:02.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0309 00:22:03.987822 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 9 00:22:03.987: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:22:03.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1046" for this suite. 
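------------------------------
[editor's note] The orphaning above comes from the delete options, not from anything on the deployment: with propagationPolicy Orphan the garbage collector strips ownerReferences instead of cascading the delete. Recent kubectl spells this --cascade=orphan (older releases used --cascade=false); a sketch with an illustrative name:

kubectl create deployment orphan-demo --image=k8s.gcr.io/pause:3.1
kubectl get rs -l app=orphan-demo          # the ReplicaSet the deployment created
kubectl delete deployment orphan-demo --cascade=orphan
kubectl get rs -l app=orphan-demo          # still present: orphaned, not deleted
kubectl delete rs -l app=orphan-demo       # clean up
------------------------------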
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":280,"completed":209,"skipped":3375,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:22:03.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 9 00:22:04.923: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 9 00:22:07.941: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:22:07.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5897" for this suite. STEP: Destroying namespace "webhook-5897-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":280,"completed":210,"skipped":3388,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:22:08.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:22:08.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3319" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":280,"completed":211,"skipped":3390,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:22:08.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a replication controller Mar 9 00:22:08.452: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1040' Mar 9 00:22:08.819: INFO: stderr: "" Mar 9 00:22:08.820: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 9 00:22:08.820: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1040' Mar 9 00:22:08.958: INFO: stderr: "" Mar 9 00:22:08.958: INFO: stdout: "update-demo-nautilus-2tkxg update-demo-nautilus-m8hlv " Mar 9 00:22:08.958: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2tkxg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1040' Mar 9 00:22:09.077: INFO: stderr: "" Mar 9 00:22:09.078: INFO: stdout: "" Mar 9 00:22:09.078: INFO: update-demo-nautilus-2tkxg is created but not running Mar 9 00:22:14.078: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1040' Mar 9 00:22:15.856: INFO: stderr: "" Mar 9 00:22:15.856: INFO: stdout: "update-demo-nautilus-2tkxg update-demo-nautilus-m8hlv " Mar 9 00:22:15.856: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2tkxg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1040' Mar 9 00:22:15.976: INFO: stderr: "" Mar 9 00:22:15.976: INFO: stdout: "true" Mar 9 00:22:15.976: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2tkxg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1040' Mar 9 00:22:16.064: INFO: stderr: "" Mar 9 00:22:16.064: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 9 00:22:16.064: INFO: validating pod update-demo-nautilus-2tkxg Mar 9 00:22:16.068: INFO: got data: { "image": "nautilus.jpg" } Mar 9 00:22:16.068: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 9 00:22:16.068: INFO: update-demo-nautilus-2tkxg is verified up and running Mar 9 00:22:16.068: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m8hlv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1040' Mar 9 00:22:16.143: INFO: stderr: "" Mar 9 00:22:16.143: INFO: stdout: "true" Mar 9 00:22:16.143: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m8hlv -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1040' Mar 9 00:22:16.209: INFO: stderr: "" Mar 9 00:22:16.209: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 9 00:22:16.209: INFO: validating pod update-demo-nautilus-m8hlv Mar 9 00:22:16.212: INFO: got data: { "image": "nautilus.jpg" } Mar 9 00:22:16.212: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 9 00:22:16.212: INFO: update-demo-nautilus-m8hlv is verified up and running STEP: using delete to clean up resources Mar 9 00:22:16.212: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1040' Mar 9 00:22:16.278: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 9 00:22:16.278: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 9 00:22:16.278: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1040' Mar 9 00:22:16.355: INFO: stderr: "No resources found in kubectl-1040 namespace.\n" Mar 9 00:22:16.355: INFO: stdout: "" Mar 9 00:22:16.355: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1040 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 9 00:22:16.425: INFO: stderr: "" Mar 9 00:22:16.425: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:22:16.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1040" for this suite. 
• [SLOW TEST:8.052 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":280,"completed":212,"skipped":3402,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:22:16.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-2976 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 9 00:22:16.505: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 9 00:22:16.557: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 9 00:22:18.579: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 9 00:22:20.561: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 9 00:22:22.561: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 9 00:22:24.561: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 9 00:22:26.561: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 9 00:22:28.561: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 9 00:22:30.561: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 9 00:22:30.567: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 9 00:22:32.585: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.175:8080/dial?request=hostname&protocol=http&host=10.244.1.174&port=8080&tries=1'] Namespace:pod-network-test-2976 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 00:22:32.585: INFO: >>> kubeConfig: /root/.kube/config I0309 00:22:32.617977 7 log.go:172] (0xc001deb130) (0xc002a08280) Create stream I0309 00:22:32.618007 7 log.go:172] (0xc001deb130) (0xc002a08280) Stream added, broadcasting: 1 I0309 00:22:32.621480 7 log.go:172] (0xc001deb130) Reply frame received for 1 I0309 00:22:32.621534 7 log.go:172] (0xc001deb130) (0xc002b13b80) Create stream I0309 00:22:32.621553 7 log.go:172] (0xc001deb130) (0xc002b13b80) Stream added, broadcasting: 3 I0309 00:22:32.623382 7 log.go:172] (0xc001deb130) Reply frame received for 3 I0309 00:22:32.623450 7 log.go:172] (0xc001deb130) (0xc001b570e0) Create stream I0309 
00:22:32.623469 7 log.go:172] (0xc001deb130) (0xc001b570e0) Stream added, broadcasting: 5 I0309 00:22:32.624481 7 log.go:172] (0xc001deb130) Reply frame received for 5 I0309 00:22:32.686818 7 log.go:172] (0xc001deb130) Data frame received for 3 I0309 00:22:32.686838 7 log.go:172] (0xc002b13b80) (3) Data frame handling I0309 00:22:32.686853 7 log.go:172] (0xc002b13b80) (3) Data frame sent I0309 00:22:32.687280 7 log.go:172] (0xc001deb130) Data frame received for 5 I0309 00:22:32.687310 7 log.go:172] (0xc001b570e0) (5) Data frame handling I0309 00:22:32.687334 7 log.go:172] (0xc001deb130) Data frame received for 3 I0309 00:22:32.687349 7 log.go:172] (0xc002b13b80) (3) Data frame handling I0309 00:22:32.688679 7 log.go:172] (0xc001deb130) Data frame received for 1 I0309 00:22:32.688698 7 log.go:172] (0xc002a08280) (1) Data frame handling I0309 00:22:32.688707 7 log.go:172] (0xc002a08280) (1) Data frame sent I0309 00:22:32.688721 7 log.go:172] (0xc001deb130) (0xc002a08280) Stream removed, broadcasting: 1 I0309 00:22:32.688741 7 log.go:172] (0xc001deb130) Go away received I0309 00:22:32.688854 7 log.go:172] (0xc001deb130) (0xc002a08280) Stream removed, broadcasting: 1 I0309 00:22:32.688879 7 log.go:172] (0xc001deb130) (0xc002b13b80) Stream removed, broadcasting: 3 I0309 00:22:32.688897 7 log.go:172] (0xc001deb130) (0xc001b570e0) Stream removed, broadcasting: 5 Mar 9 00:22:32.688: INFO: Waiting for responses: map[] Mar 9 00:22:32.692: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.175:8080/dial?request=hostname&protocol=http&host=10.244.2.125&port=8080&tries=1'] Namespace:pod-network-test-2976 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 00:22:32.692: INFO: >>> kubeConfig: /root/.kube/config I0309 00:22:32.721913 7 log.go:172] (0xc0023dadc0) (0xc001b575e0) Create stream I0309 00:22:32.721935 7 log.go:172] (0xc0023dadc0) (0xc001b575e0) Stream added, broadcasting: 1 I0309 00:22:32.724026 7 log.go:172] (0xc0023dadc0) Reply frame received for 1 I0309 00:22:32.724072 7 log.go:172] (0xc0023dadc0) (0xc0011bbc20) Create stream I0309 00:22:32.724092 7 log.go:172] (0xc0023dadc0) (0xc0011bbc20) Stream added, broadcasting: 3 I0309 00:22:32.724955 7 log.go:172] (0xc0023dadc0) Reply frame received for 3 I0309 00:22:32.725011 7 log.go:172] (0xc0023dadc0) (0xc002b13d60) Create stream I0309 00:22:32.725028 7 log.go:172] (0xc0023dadc0) (0xc002b13d60) Stream added, broadcasting: 5 I0309 00:22:32.726014 7 log.go:172] (0xc0023dadc0) Reply frame received for 5 I0309 00:22:32.811203 7 log.go:172] (0xc0023dadc0) Data frame received for 3 I0309 00:22:32.811225 7 log.go:172] (0xc0011bbc20) (3) Data frame handling I0309 00:22:32.811256 7 log.go:172] (0xc0011bbc20) (3) Data frame sent I0309 00:22:32.811716 7 log.go:172] (0xc0023dadc0) Data frame received for 3 I0309 00:22:32.811737 7 log.go:172] (0xc0011bbc20) (3) Data frame handling I0309 00:22:32.811797 7 log.go:172] (0xc0023dadc0) Data frame received for 5 I0309 00:22:32.811820 7 log.go:172] (0xc002b13d60) (5) Data frame handling I0309 00:22:32.813543 7 log.go:172] (0xc0023dadc0) Data frame received for 1 I0309 00:22:32.813565 7 log.go:172] (0xc001b575e0) (1) Data frame handling I0309 00:22:32.813583 7 log.go:172] (0xc001b575e0) (1) Data frame sent I0309 00:22:32.813601 7 log.go:172] (0xc0023dadc0) (0xc001b575e0) Stream removed, broadcasting: 1 I0309 00:22:32.813620 7 log.go:172] (0xc0023dadc0) Go away received I0309 00:22:32.813744 7 log.go:172] (0xc0023dadc0) 
(0xc001b575e0) Stream removed, broadcasting: 1 I0309 00:22:32.813770 7 log.go:172] (0xc0023dadc0) (0xc0011bbc20) Stream removed, broadcasting: 3 I0309 00:22:32.813778 7 log.go:172] (0xc0023dadc0) (0xc002b13d60) Stream removed, broadcasting: 5 Mar 9 00:22:32.813: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:22:32.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2976" for this suite. • [SLOW TEST:16.390 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":280,"completed":213,"skipped":3497,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:22:32.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 9 00:22:32.903: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c7dbaaff-91cc-4376-8458-379c9b473655" in namespace "projected-4872" to be "success or failure" Mar 9 00:22:32.922: INFO: Pod "downwardapi-volume-c7dbaaff-91cc-4376-8458-379c9b473655": Phase="Pending", Reason="", readiness=false. Elapsed: 18.960753ms Mar 9 00:22:34.926: INFO: Pod "downwardapi-volume-c7dbaaff-91cc-4376-8458-379c9b473655": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.023134735s STEP: Saw pod success Mar 9 00:22:34.926: INFO: Pod "downwardapi-volume-c7dbaaff-91cc-4376-8458-379c9b473655" satisfied condition "success or failure" Mar 9 00:22:34.930: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-c7dbaaff-91cc-4376-8458-379c9b473655 container client-container: STEP: delete the pod Mar 9 00:22:34.944: INFO: Waiting for pod downwardapi-volume-c7dbaaff-91cc-4376-8458-379c9b473655 to disappear Mar 9 00:22:34.959: INFO: Pod downwardapi-volume-c7dbaaff-91cc-4376-8458-379c9b473655 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:22:34.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4872" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":280,"completed":214,"skipped":3515,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:22:35.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 9 00:22:35.067: INFO: Waiting up to 5m0s for pod "downwardapi-volume-87501815-0b76-460e-bf04-c7d16e495baf" in namespace "downward-api-9008" to be "success or failure" Mar 9 00:22:35.074: INFO: Pod "downwardapi-volume-87501815-0b76-460e-bf04-c7d16e495baf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.537319ms Mar 9 00:22:37.078: INFO: Pod "downwardapi-volume-87501815-0b76-460e-bf04-c7d16e495baf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010680021s STEP: Saw pod success Mar 9 00:22:37.078: INFO: Pod "downwardapi-volume-87501815-0b76-460e-bf04-c7d16e495baf" satisfied condition "success or failure" Mar 9 00:22:37.081: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-87501815-0b76-460e-bf04-c7d16e495baf container client-container: STEP: delete the pod Mar 9 00:22:37.105: INFO: Waiting for pod downwardapi-volume-87501815-0b76-460e-bf04-c7d16e495baf to disappear Mar 9 00:22:37.125: INFO: Pod downwardapi-volume-87501815-0b76-460e-bf04-c7d16e495baf no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:22:37.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9008" for this suite. 
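The pod spec behind this downward-API check is not shown in the log. What the test exercises: a downwardAPI volume item pointing at limits.cpu while the container sets no CPU limit, in which case the kubelet projects the node's allocatable CPU instead. A minimal sketch, with the container name taken from the log and the pod name, image, and command as assumptions:

    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-example    # the suite generates a UUID-based name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container            # container name as logged above
        image: docker.io/library/busybox:1.29   # assumed image
        command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu        # no limit set, so allocatable CPU is reported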
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":215,"skipped":3524,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:22:37.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 9 00:22:37.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Mar 9 00:22:37.804: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-09T00:22:37Z generation:1 name:name1 resourceVersion:145593 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:53173268-7753-496f-9ea7-e2f959d25d41] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Mar 9 00:22:47.810: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-09T00:22:47Z generation:1 name:name2 resourceVersion:145670 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:ce0d4c1c-bda8-4350-9a7b-974731814fff] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Mar 9 00:22:57.815: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-09T00:22:37Z generation:2 name:name1 resourceVersion:145700 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:53173268-7753-496f-9ea7-e2f959d25d41] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Mar 9 00:23:07.820: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-09T00:22:47Z generation:2 name:name2 resourceVersion:145731 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:ce0d4c1c-bda8-4350-9a7b-974731814fff] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Mar 9 00:23:17.827: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-09T00:22:37Z generation:2 name:name1 resourceVersion:145761 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:53173268-7753-496f-9ea7-e2f959d25d41] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Mar 9 00:23:27.834: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-09T00:22:47Z generation:2 name:name2 resourceVersion:145791 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 
uid:ce0d4c1c-bda8-4350-9a7b-974731814fff] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:23:38.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-7689" for this suite. • [SLOW TEST:61.217 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":280,"completed":216,"skipped":3526,"failed":0} [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:23:38.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 9 00:23:38.452: INFO: Waiting up to 5m0s for pod "downwardapi-volume-09bb62ad-d93d-44be-8903-fd488a6d90af" in namespace "downward-api-8145" to be "success or failure" Mar 9 00:23:38.457: INFO: Pod "downwardapi-volume-09bb62ad-d93d-44be-8903-fd488a6d90af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.371121ms Mar 9 00:23:40.461: INFO: Pod "downwardapi-volume-09bb62ad-d93d-44be-8903-fd488a6d90af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008464573s Mar 9 00:23:42.464: INFO: Pod "downwardapi-volume-09bb62ad-d93d-44be-8903-fd488a6d90af": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011914842s STEP: Saw pod success Mar 9 00:23:42.464: INFO: Pod "downwardapi-volume-09bb62ad-d93d-44be-8903-fd488a6d90af" satisfied condition "success or failure" Mar 9 00:23:42.467: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-09bb62ad-d93d-44be-8903-fd488a6d90af container client-container: STEP: delete the pod Mar 9 00:23:42.501: INFO: Waiting for pod downwardapi-volume-09bb62ad-d93d-44be-8903-fd488a6d90af to disappear Mar 9 00:23:42.521: INFO: Pod downwardapi-volume-09bb62ad-d93d-44be-8903-fd488a6d90af no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:23:42.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8145" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":217,"skipped":3526,"failed":0} ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:23:42.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-1449 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-1449 STEP: creating replication controller externalsvc in namespace services-1449 I0309 00:23:42.673781 7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-1449, replica count: 2 I0309 00:23:45.724170 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Mar 9 00:23:45.751: INFO: Creating new exec pod Mar 9 00:23:47.767: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-1449 execpodbc9sd -- /bin/sh -x -c nslookup clusterip-service' Mar 9 00:23:47.966: INFO: stderr: "I0309 00:23:47.883620 2367 log.go:172] (0xc00003adc0) (0xc0006a5ae0) Create stream\nI0309 00:23:47.883671 2367 log.go:172] (0xc00003adc0) (0xc0006a5ae0) Stream added, broadcasting: 1\nI0309 00:23:47.885755 2367 log.go:172] (0xc00003adc0) Reply frame received for 1\nI0309 00:23:47.885785 2367 log.go:172] (0xc00003adc0) (0xc0008d4000) Create stream\nI0309 00:23:47.885795 2367 log.go:172] (0xc00003adc0) (0xc0008d4000) Stream added, broadcasting: 3\nI0309 00:23:47.886457 2367 log.go:172] (0xc00003adc0) Reply frame received for 3\nI0309 00:23:47.886483 2367 log.go:172] (0xc00003adc0) (0xc000216000) 
Create stream\nI0309 00:23:47.886493 2367 log.go:172] (0xc00003adc0) (0xc000216000) Stream added, broadcasting: 5\nI0309 00:23:47.887168 2367 log.go:172] (0xc00003adc0) Reply frame received for 5\nI0309 00:23:47.956247 2367 log.go:172] (0xc00003adc0) Data frame received for 5\nI0309 00:23:47.956275 2367 log.go:172] (0xc000216000) (5) Data frame handling\nI0309 00:23:47.956292 2367 log.go:172] (0xc000216000) (5) Data frame sent\n+ nslookup clusterip-service\nI0309 00:23:47.961112 2367 log.go:172] (0xc00003adc0) Data frame received for 3\nI0309 00:23:47.961137 2367 log.go:172] (0xc0008d4000) (3) Data frame handling\nI0309 00:23:47.961156 2367 log.go:172] (0xc0008d4000) (3) Data frame sent\nI0309 00:23:47.962209 2367 log.go:172] (0xc00003adc0) Data frame received for 3\nI0309 00:23:47.962243 2367 log.go:172] (0xc0008d4000) (3) Data frame handling\nI0309 00:23:47.962261 2367 log.go:172] (0xc0008d4000) (3) Data frame sent\nI0309 00:23:47.962335 2367 log.go:172] (0xc00003adc0) Data frame received for 3\nI0309 00:23:47.962369 2367 log.go:172] (0xc0008d4000) (3) Data frame handling\nI0309 00:23:47.962557 2367 log.go:172] (0xc00003adc0) Data frame received for 5\nI0309 00:23:47.962579 2367 log.go:172] (0xc000216000) (5) Data frame handling\nI0309 00:23:47.963957 2367 log.go:172] (0xc00003adc0) Data frame received for 1\nI0309 00:23:47.963972 2367 log.go:172] (0xc0006a5ae0) (1) Data frame handling\nI0309 00:23:47.963987 2367 log.go:172] (0xc0006a5ae0) (1) Data frame sent\nI0309 00:23:47.964003 2367 log.go:172] (0xc00003adc0) (0xc0006a5ae0) Stream removed, broadcasting: 1\nI0309 00:23:47.964016 2367 log.go:172] (0xc00003adc0) Go away received\nI0309 00:23:47.964327 2367 log.go:172] (0xc00003adc0) (0xc0006a5ae0) Stream removed, broadcasting: 1\nI0309 00:23:47.964344 2367 log.go:172] (0xc00003adc0) (0xc0008d4000) Stream removed, broadcasting: 3\nI0309 00:23:47.964353 2367 log.go:172] (0xc00003adc0) (0xc000216000) Stream removed, broadcasting: 5\n" Mar 9 00:23:47.967: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-1449.svc.cluster.local\tcanonical name = externalsvc.services-1449.svc.cluster.local.\nName:\texternalsvc.services-1449.svc.cluster.local\nAddress: 10.96.27.175\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1449, will wait for the garbage collector to delete the pods Mar 9 00:23:48.024: INFO: Deleting ReplicationController externalsvc took: 4.354392ms Mar 9 00:23:48.324: INFO: Terminating ReplicationController externalsvc pods took: 300.235277ms Mar 9 00:24:02.538: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:24:02.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1449" for this suite. 
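The type flip itself happens through the API inside the suite. Done by hand it would amount to patching the service and re-running the same lookup; the patch payload below is an assumption about the equivalent kubectl call (the suite also clears the service's cluster IP when changing the type), while the exec/nslookup line and the ExternalName target are taken from the log:

    # Assumed kubectl equivalent of the suite's API update; the externalName
    # matches the CNAME answer seen in the nslookup output above.
    kubectl patch service clusterip-service --namespace=services-1449 \
      -p '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-1449.svc.cluster.local"}}'
    kubectl exec --namespace=services-1449 execpodbc9sd -- nslookup clusterip-service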
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:20.079 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":280,"completed":218,"skipped":3526,"failed":0} S ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:24:02.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Mar 9 00:24:02.657: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:24:05.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6369" for this suite. 
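The pod spec is only summarized in the log ("PodSpec: initContainers in spec.initContainers"). A minimal sketch of what the test asserts: with restartPolicy Never, a failing init container drives the pod to Failed and the app container never starts. Pod name, container names, and images are illustrative, not taken from the suite:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-init-fails-never              # illustrative name
    spec:
      restartPolicy: Never
      initContainers:
      - name: init1
        image: docker.io/library/busybox:1.29 # assumed image
        command: ["/bin/false"]               # guaranteed init failure
      containers:
      - name: run1
        image: docker.io/library/busybox:1.29
        command: ["/bin/true"]                # must never be started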
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":280,"completed":219,"skipped":3527,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:24:05.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a replication controller Mar 9 00:24:05.790: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4835' Mar 9 00:24:06.141: INFO: stderr: "" Mar 9 00:24:06.141: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 9 00:24:06.142: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4835' Mar 9 00:24:06.274: INFO: stderr: "" Mar 9 00:24:06.274: INFO: stdout: "update-demo-nautilus-cxqsd update-demo-nautilus-zxz7r " Mar 9 00:24:06.274: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cxqsd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4835' Mar 9 00:24:06.342: INFO: stderr: "" Mar 9 00:24:06.342: INFO: stdout: "" Mar 9 00:24:06.342: INFO: update-demo-nautilus-cxqsd is created but not running Mar 9 00:24:11.343: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4835' Mar 9 00:24:11.454: INFO: stderr: "" Mar 9 00:24:11.454: INFO: stdout: "update-demo-nautilus-cxqsd update-demo-nautilus-zxz7r " Mar 9 00:24:11.454: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cxqsd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4835' Mar 9 00:24:11.552: INFO: stderr: "" Mar 9 00:24:11.552: INFO: stdout: "true" Mar 9 00:24:11.552: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cxqsd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4835' Mar 9 00:24:11.623: INFO: stderr: "" Mar 9 00:24:11.623: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 9 00:24:11.623: INFO: validating pod update-demo-nautilus-cxqsd Mar 9 00:24:11.625: INFO: got data: { "image": "nautilus.jpg" } Mar 9 00:24:11.625: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 9 00:24:11.625: INFO: update-demo-nautilus-cxqsd is verified up and running Mar 9 00:24:11.625: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zxz7r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4835' Mar 9 00:24:11.701: INFO: stderr: "" Mar 9 00:24:11.701: INFO: stdout: "true" Mar 9 00:24:11.701: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zxz7r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4835' Mar 9 00:24:11.769: INFO: stderr: "" Mar 9 00:24:11.770: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 9 00:24:11.770: INFO: validating pod update-demo-nautilus-zxz7r Mar 9 00:24:11.773: INFO: got data: { "image": "nautilus.jpg" } Mar 9 00:24:11.773: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 9 00:24:11.773: INFO: update-demo-nautilus-zxz7r is verified up and running STEP: scaling down the replication controller Mar 9 00:24:11.774: INFO: scanned /root for discovery docs: Mar 9 00:24:11.775: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4835' Mar 9 00:24:12.856: INFO: stderr: "" Mar 9 00:24:12.856: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 9 00:24:12.856: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4835' Mar 9 00:24:12.961: INFO: stderr: "" Mar 9 00:24:12.962: INFO: stdout: "update-demo-nautilus-cxqsd update-demo-nautilus-zxz7r " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 9 00:24:17.962: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4835' Mar 9 00:24:18.082: INFO: stderr: "" Mar 9 00:24:18.082: INFO: stdout: "update-demo-nautilus-cxqsd update-demo-nautilus-zxz7r " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 9 00:24:23.082: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4835' Mar 9 00:24:23.205: INFO: stderr: "" Mar 9 00:24:23.205: INFO: stdout: "update-demo-nautilus-cxqsd " Mar 9 00:24:23.205: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cxqsd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4835' Mar 9 00:24:23.304: INFO: stderr: "" Mar 9 00:24:23.304: INFO: stdout: "true" Mar 9 00:24:23.304: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cxqsd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4835' Mar 9 00:24:23.381: INFO: stderr: "" Mar 9 00:24:23.381: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 9 00:24:23.381: INFO: validating pod update-demo-nautilus-cxqsd Mar 9 00:24:23.383: INFO: got data: { "image": "nautilus.jpg" } Mar 9 00:24:23.383: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 9 00:24:23.383: INFO: update-demo-nautilus-cxqsd is verified up and running STEP: scaling up the replication controller Mar 9 00:24:23.385: INFO: scanned /root for discovery docs: Mar 9 00:24:23.385: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4835' Mar 9 00:24:24.488: INFO: stderr: "" Mar 9 00:24:24.488: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 9 00:24:24.488: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4835' Mar 9 00:24:24.582: INFO: stderr: "" Mar 9 00:24:24.582: INFO: stdout: "update-demo-nautilus-cxqsd update-demo-nautilus-xz6kw " Mar 9 00:24:24.582: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cxqsd -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4835' Mar 9 00:24:24.658: INFO: stderr: "" Mar 9 00:24:24.658: INFO: stdout: "true" Mar 9 00:24:24.658: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cxqsd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4835' Mar 9 00:24:24.745: INFO: stderr: "" Mar 9 00:24:24.745: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 9 00:24:24.745: INFO: validating pod update-demo-nautilus-cxqsd Mar 9 00:24:24.748: INFO: got data: { "image": "nautilus.jpg" } Mar 9 00:24:24.748: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 9 00:24:24.748: INFO: update-demo-nautilus-cxqsd is verified up and running Mar 9 00:24:24.748: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xz6kw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4835' Mar 9 00:24:24.816: INFO: stderr: "" Mar 9 00:24:24.816: INFO: stdout: "" Mar 9 00:24:24.816: INFO: update-demo-nautilus-xz6kw is created but not running Mar 9 00:24:29.816: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4835' Mar 9 00:24:29.934: INFO: stderr: "" Mar 9 00:24:29.934: INFO: stdout: "update-demo-nautilus-cxqsd update-demo-nautilus-xz6kw " Mar 9 00:24:29.934: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cxqsd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4835' Mar 9 00:24:30.018: INFO: stderr: "" Mar 9 00:24:30.018: INFO: stdout: "true" Mar 9 00:24:30.018: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cxqsd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4835' Mar 9 00:24:30.105: INFO: stderr: "" Mar 9 00:24:30.105: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 9 00:24:30.105: INFO: validating pod update-demo-nautilus-cxqsd Mar 9 00:24:30.108: INFO: got data: { "image": "nautilus.jpg" } Mar 9 00:24:30.108: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 9 00:24:30.108: INFO: update-demo-nautilus-cxqsd is verified up and running Mar 9 00:24:30.108: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xz6kw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4835' Mar 9 00:24:30.184: INFO: stderr: "" Mar 9 00:24:30.184: INFO: stdout: "true" Mar 9 00:24:30.184: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xz6kw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4835' Mar 9 00:24:30.247: INFO: stderr: "" Mar 9 00:24:30.247: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 9 00:24:30.248: INFO: validating pod update-demo-nautilus-xz6kw Mar 9 00:24:30.251: INFO: got data: { "image": "nautilus.jpg" } Mar 9 00:24:30.251: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 9 00:24:30.251: INFO: update-demo-nautilus-xz6kw is verified up and running STEP: using delete to clean up resources Mar 9 00:24:30.251: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4835' Mar 9 00:24:30.317: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 9 00:24:30.317: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 9 00:24:30.317: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4835' Mar 9 00:24:30.399: INFO: stderr: "No resources found in kubectl-4835 namespace.\n" Mar 9 00:24:30.399: INFO: stdout: "" Mar 9 00:24:30.399: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4835 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 9 00:24:30.463: INFO: stderr: "" Mar 9 00:24:30.463: INFO: stdout: "update-demo-nautilus-cxqsd\nupdate-demo-nautilus-xz6kw\n" Mar 9 00:24:30.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4835' Mar 9 00:24:31.053: INFO: stderr: "No resources found in kubectl-4835 namespace.\n" Mar 9 00:24:31.053: INFO: stdout: "" Mar 9 00:24:31.053: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4835 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 9 00:24:31.133: INFO: stderr: "" Mar 9 00:24:31.133: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:24:31.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4835" for this suite. 
• [SLOW TEST:25.434 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":280,"completed":220,"skipped":3584,"failed":0} SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:24:31.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 9 00:24:31.286: INFO: Create a RollingUpdate DaemonSet Mar 9 00:24:31.288: INFO: Check that daemon pods launch on every node of the cluster Mar 9 00:24:31.297: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 00:24:31.316: INFO: Number of nodes with available pods: 0 Mar 9 00:24:31.316: INFO: Node latest-worker is running more than one daemon pod Mar 9 00:24:32.320: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 00:24:32.323: INFO: Number of nodes with available pods: 0 Mar 9 00:24:32.323: INFO: Node latest-worker is running more than one daemon pod Mar 9 00:24:33.320: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 00:24:33.323: INFO: Number of nodes with available pods: 1 Mar 9 00:24:33.323: INFO: Node latest-worker2 is running more than one daemon pod Mar 9 00:24:34.321: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 00:24:34.324: INFO: Number of nodes with available pods: 2 Mar 9 00:24:34.324: INFO: Number of running nodes: 2, number of available pods: 2 Mar 9 00:24:34.324: INFO: Update the DaemonSet to trigger a rollout Mar 9 00:24:34.330: INFO: Updating DaemonSet daemon-set Mar 9 00:24:37.363: INFO: Roll back the DaemonSet before rollout is complete Mar 9 00:24:37.369: INFO: Updating DaemonSet daemon-set Mar 9 00:24:37.369: INFO: Make sure DaemonSet rollback is complete Mar 9 00:24:37.375: INFO: Wrong image for pod: daemon-set-8whl5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Mar 9 00:24:37.375: INFO: Pod daemon-set-8whl5 is not available Mar 9 00:24:37.382: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 00:24:38.433: INFO: Wrong image for pod: daemon-set-8whl5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 9 00:24:38.433: INFO: Pod daemon-set-8whl5 is not available Mar 9 00:24:38.442: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 00:24:39.386: INFO: Pod daemon-set-x2q6l is not available Mar 9 00:24:39.390: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-359, will wait for the garbage collector to delete the pods Mar 9 00:24:39.455: INFO: Deleting DaemonSet.extensions daemon-set took: 5.664627ms Mar 9 00:24:39.556: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.221104ms Mar 9 00:24:52.260: INFO: Number of nodes with available pods: 0 Mar 9 00:24:52.260: INFO: Number of running nodes: 0, number of available pods: 0 Mar 9 00:24:52.263: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-359/daemonsets","resourceVersion":"146347"},"items":null} Mar 9 00:24:52.265: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-359/pods","resourceVersion":"146347"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:24:52.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-359" for this suite. 
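The rollout and rollback in this test go through the apps API. A sketch of the kubectl equivalent, under the assumptions that the DaemonSet's container is named app (the container name is not in the log) and that the broken image is the foo:non-existent value logged above:

    # Trigger a rolling update with an unpullable image, then undo it before
    # the rollout completes; healthy pods should not be restarted.
    kubectl set image daemonset/daemon-set app=foo:non-existent --namespace=daemonsets-359
    kubectl rollout undo daemonset/daemon-set --namespace=daemonsets-359
    kubectl rollout status daemonset/daemon-set --namespace=daemonsets-359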
• [SLOW TEST:21.161 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":280,"completed":221,"skipped":3591,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:24:52.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 9 00:24:52.382: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 00:24:52.400: INFO: Number of nodes with available pods: 0 Mar 9 00:24:52.400: INFO: Node latest-worker is running more than one daemon pod Mar 9 00:24:53.405: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 00:24:53.407: INFO: Number of nodes with available pods: 0 Mar 9 00:24:53.407: INFO: Node latest-worker is running more than one daemon pod Mar 9 00:24:54.406: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 00:24:54.409: INFO: Number of nodes with available pods: 1 Mar 9 00:24:54.409: INFO: Node latest-worker2 is running more than one daemon pod Mar 9 00:24:55.406: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 00:24:55.409: INFO: Number of nodes with available pods: 2 Mar 9 00:24:55.409: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Mar 9 00:24:55.436: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 00:24:55.457: INFO: Number of nodes with available pods: 2 Mar 9 00:24:55.457: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3085, will wait for the garbage collector to delete the pods Mar 9 00:24:56.588: INFO: Deleting DaemonSet.extensions daemon-set took: 5.206187ms Mar 9 00:24:56.888: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.252007ms Mar 9 00:26:32.104: INFO: Number of nodes with available pods: 0 Mar 9 00:26:32.104: INFO: Number of running nodes: 0, number of available pods: 0 Mar 9 00:26:32.106: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3085/daemonsets","resourceVersion":"146723"},"items":null} Mar 9 00:26:32.108: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3085/pods","resourceVersion":"146723"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:26:32.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3085" for this suite. • [SLOW TEST:99.842 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":280,"completed":222,"skipped":3613,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:26:32.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:26:36.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4470" for this suite. 
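What this test asserts, restated: a container whose command always fails ends up with a populated state.terminated, including a reason (the run's actual pod, bin-false4b62d234-d156-427e-993f-ed11ff9af11e, shows up in the next test's node listing). A jsonpath probe along these lines surfaces the same field; the pod name here is illustrative:

    kubectl get pod bin-false-example --namespace=kubelet-test-4470 \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
    # Typical output once the container has exited non-zero: Error
    # (between restarts the value may sit under lastState.terminated instead)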
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":280,"completed":223,"skipped":3630,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:26:36.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88 Mar 9 00:26:36.242: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 9 00:26:36.289: INFO: Waiting for terminating namespaces to be deleted... Mar 9 00:26:36.294: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 9 00:26:36.308: INFO: kindnet-2j5xm from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 9 00:26:36.308: INFO: Container kindnet-cni ready: true, restart count 0 Mar 9 00:26:36.308: INFO: kube-proxy-9jc24 from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 9 00:26:36.308: INFO: Container kube-proxy ready: true, restart count 0 Mar 9 00:26:36.308: INFO: bin-false4b62d234-d156-427e-993f-ed11ff9af11e from kubelet-test-4470 started at 2020-03-09 00:26:32 +0000 UTC (1 container statuses recorded) Mar 9 00:26:36.308: INFO: Container bin-false4b62d234-d156-427e-993f-ed11ff9af11e ready: false, restart count 0 Mar 9 00:26:36.308: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 9 00:26:36.325: INFO: kindnet-spz5f from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 9 00:26:36.325: INFO: Container kindnet-cni ready: true, restart count 0 Mar 9 00:26:36.325: INFO: coredns-6955765f44-cgshp from kube-system started at 2020-03-08 14:50:16 +0000 UTC (1 container statuses recorded) Mar 9 00:26:36.325: INFO: Container coredns ready: true, restart count 0 Mar 9 00:26:36.325: INFO: kube-proxy-cx5xz from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 9 00:26:36.325: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fa7b1fffb7d7f8], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:26:37.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3318" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":280,"completed":224,"skipped":3647,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:26:37.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-eef725f7-0bac-4a8a-92c0-c7fd2e3cba2f STEP: Creating a pod to test consume configMaps Mar 9 00:26:37.523: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7d4d1066-c433-40f0-947b-f27dbe37a550" in namespace "projected-1629" to be "success or failure" Mar 9 00:26:37.545: INFO: Pod "pod-projected-configmaps-7d4d1066-c433-40f0-947b-f27dbe37a550": Phase="Pending", Reason="", readiness=false. Elapsed: 21.799764ms Mar 9 00:26:39.549: INFO: Pod "pod-projected-configmaps-7d4d1066-c433-40f0-947b-f27dbe37a550": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.025406262s STEP: Saw pod success Mar 9 00:26:39.549: INFO: Pod "pod-projected-configmaps-7d4d1066-c433-40f0-947b-f27dbe37a550" satisfied condition "success or failure" Mar 9 00:26:39.551: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-7d4d1066-c433-40f0-947b-f27dbe37a550 container projected-configmap-volume-test: STEP: delete the pod Mar 9 00:26:39.570: INFO: Waiting for pod pod-projected-configmaps-7d4d1066-c433-40f0-947b-f27dbe37a550 to disappear Mar 9 00:26:39.575: INFO: Pod pod-projected-configmaps-7d4d1066-c433-40f0-947b-f27dbe37a550 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:26:39.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1629" for this suite. 
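defaultMode on a projected volume sets the permission bits the kubelet gives every file it materializes from the listed sources. A hand-run sketch of what this spec checks; names, the 0400 mode, and the probe command are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata: {name: cm-mode-demo}
data: {data-1: value-1}
---
apiVersion: v1
kind: Pod
metadata: {name: cm-mode-pod}
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.29
    command: ["sh", "-c", "stat -L -c '%a' /etc/cm/data-1; cat /etc/cm/data-1"]
    volumeMounts:
    - {name: cm, mountPath: /etc/cm}
  volumes:
  - name: cm
    projected:
      defaultMode: 0400          # files land read-only for the owner
      sources:
      - configMap: {name: cm-mode-demo}
EOF
kubectl logs cm-mode-pod         # after completion; should print 400 then value-1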
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":225,"skipped":3680,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:26:39.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 9 00:26:39.657: INFO: Waiting up to 5m0s for pod "downwardapi-volume-de5bf129-b34b-4549-8424-d94c004e7e0b" in namespace "projected-2026" to be "success or failure" Mar 9 00:26:39.690: INFO: Pod "downwardapi-volume-de5bf129-b34b-4549-8424-d94c004e7e0b": Phase="Pending", Reason="", readiness=false. Elapsed: 33.206523ms Mar 9 00:26:41.694: INFO: Pod "downwardapi-volume-de5bf129-b34b-4549-8424-d94c004e7e0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037370189s Mar 9 00:26:43.698: INFO: Pod "downwardapi-volume-de5bf129-b34b-4549-8424-d94c004e7e0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04094896s STEP: Saw pod success Mar 9 00:26:43.698: INFO: Pod "downwardapi-volume-de5bf129-b34b-4549-8424-d94c004e7e0b" satisfied condition "success or failure" Mar 9 00:26:43.700: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-de5bf129-b34b-4549-8424-d94c004e7e0b container client-container: STEP: delete the pod Mar 9 00:26:43.781: INFO: Waiting for pod downwardapi-volume-de5bf129-b34b-4549-8424-d94c004e7e0b to disappear Mar 9 00:26:43.790: INFO: Pod downwardapi-volume-de5bf129-b34b-4549-8424-d94c004e7e0b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:26:43.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2026" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":280,"completed":226,"skipped":3698,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:26:43.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-map-3ba5a626-1cca-43d9-aab1-3176998a9579 STEP: Creating a pod to test consume configMaps Mar 9 00:26:43.865: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a2bb475f-cde4-44dd-96dc-08faba018219" in namespace "projected-3465" to be "success or failure" Mar 9 00:26:43.868: INFO: Pod "pod-projected-configmaps-a2bb475f-cde4-44dd-96dc-08faba018219": Phase="Pending", Reason="", readiness=false. Elapsed: 3.689504ms Mar 9 00:26:45.872: INFO: Pod "pod-projected-configmaps-a2bb475f-cde4-44dd-96dc-08faba018219": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00782302s Mar 9 00:26:47.877: INFO: Pod "pod-projected-configmaps-a2bb475f-cde4-44dd-96dc-08faba018219": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011945624s STEP: Saw pod success Mar 9 00:26:47.877: INFO: Pod "pod-projected-configmaps-a2bb475f-cde4-44dd-96dc-08faba018219" satisfied condition "success or failure" Mar 9 00:26:47.879: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-a2bb475f-cde4-44dd-96dc-08faba018219 container projected-configmap-volume-test: STEP: delete the pod Mar 9 00:26:47.914: INFO: Waiting for pod pod-projected-configmaps-a2bb475f-cde4-44dd-96dc-08faba018219 to disappear Mar 9 00:26:47.917: INFO: Pod pod-projected-configmaps-a2bb475f-cde4-44dd-96dc-08faba018219 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:26:47.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3465" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":227,"skipped":3736,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:26:47.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating the pod Mar 9 00:26:50.539: INFO: Successfully updated pod "labelsupdate49a82aec-f4ee-48c1-92fc-80c0d6df5867" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:26:52.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6615" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":280,"completed":228,"skipped":3739,"failed":0} ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:26:52.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-4496 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-4496 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4496 Mar 9 00:26:52.660: INFO: Found 0 stateful pods, waiting for 1 Mar 9 00:27:02.664: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 9 00:27:02.667: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config 
exec --namespace=statefulset-4496 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 9 00:27:02.911: INFO: stderr: "I0309 00:27:02.821579 2977 log.go:172] (0xc000a7adc0) (0xc000a4e320) Create stream\nI0309 00:27:02.821631 2977 log.go:172] (0xc000a7adc0) (0xc000a4e320) Stream added, broadcasting: 1\nI0309 00:27:02.825674 2977 log.go:172] (0xc000a7adc0) Reply frame received for 1\nI0309 00:27:02.825727 2977 log.go:172] (0xc000a7adc0) (0xc00021d2c0) Create stream\nI0309 00:27:02.825770 2977 log.go:172] (0xc000a7adc0) (0xc00021d2c0) Stream added, broadcasting: 3\nI0309 00:27:02.826678 2977 log.go:172] (0xc000a7adc0) Reply frame received for 3\nI0309 00:27:02.826705 2977 log.go:172] (0xc000a7adc0) (0xc000a5c000) Create stream\nI0309 00:27:02.826713 2977 log.go:172] (0xc000a7adc0) (0xc000a5c000) Stream added, broadcasting: 5\nI0309 00:27:02.827503 2977 log.go:172] (0xc000a7adc0) Reply frame received for 5\nI0309 00:27:02.890159 2977 log.go:172] (0xc000a7adc0) Data frame received for 5\nI0309 00:27:02.890184 2977 log.go:172] (0xc000a5c000) (5) Data frame handling\nI0309 00:27:02.890200 2977 log.go:172] (0xc000a5c000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0309 00:27:02.906399 2977 log.go:172] (0xc000a7adc0) Data frame received for 3\nI0309 00:27:02.906420 2977 log.go:172] (0xc00021d2c0) (3) Data frame handling\nI0309 00:27:02.906438 2977 log.go:172] (0xc00021d2c0) (3) Data frame sent\nI0309 00:27:02.906586 2977 log.go:172] (0xc000a7adc0) Data frame received for 5\nI0309 00:27:02.906616 2977 log.go:172] (0xc000a5c000) (5) Data frame handling\nI0309 00:27:02.906642 2977 log.go:172] (0xc000a7adc0) Data frame received for 3\nI0309 00:27:02.906652 2977 log.go:172] (0xc00021d2c0) (3) Data frame handling\nI0309 00:27:02.908190 2977 log.go:172] (0xc000a7adc0) Data frame received for 1\nI0309 00:27:02.908205 2977 log.go:172] (0xc000a4e320) (1) Data frame handling\nI0309 00:27:02.908214 2977 log.go:172] (0xc000a4e320) (1) Data frame sent\nI0309 00:27:02.908222 2977 log.go:172] (0xc000a7adc0) (0xc000a4e320) Stream removed, broadcasting: 1\nI0309 00:27:02.908265 2977 log.go:172] (0xc000a7adc0) Go away received\nI0309 00:27:02.908505 2977 log.go:172] (0xc000a7adc0) (0xc000a4e320) Stream removed, broadcasting: 1\nI0309 00:27:02.908519 2977 log.go:172] (0xc000a7adc0) (0xc00021d2c0) Stream removed, broadcasting: 3\nI0309 00:27:02.908525 2977 log.go:172] (0xc000a7adc0) (0xc000a5c000) Stream removed, broadcasting: 5\n" Mar 9 00:27:02.911: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 9 00:27:02.911: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 9 00:27:02.913: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 9 00:27:12.918: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 9 00:27:12.918: INFO: Waiting for statefulset status.replicas updated to 0 Mar 9 00:27:12.960: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999205s Mar 9 00:27:13.964: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.96587226s Mar 9 00:27:14.968: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.961964394s Mar 9 00:27:15.972: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.957577732s Mar 9 00:27:16.976: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.954118335s Mar 
9 00:27:17.980: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.950130377s Mar 9 00:27:18.984: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.946038239s Mar 9 00:27:19.988: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.942030167s Mar 9 00:27:20.992: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.938101432s Mar 9 00:27:21.996: INFO: Verifying statefulset ss doesn't scale past 1 for another 933.770083ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4496 Mar 9 00:27:23.001: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4496 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 9 00:27:23.202: INFO: stderr: "I0309 00:27:23.148367 2998 log.go:172] (0xc0003c4fd0) (0xc0008c4000) Create stream\nI0309 00:27:23.148418 2998 log.go:172] (0xc0003c4fd0) (0xc0008c4000) Stream added, broadcasting: 1\nI0309 00:27:23.150379 2998 log.go:172] (0xc0003c4fd0) Reply frame received for 1\nI0309 00:27:23.150411 2998 log.go:172] (0xc0003c4fd0) (0xc0008c40a0) Create stream\nI0309 00:27:23.150419 2998 log.go:172] (0xc0003c4fd0) (0xc0008c40a0) Stream added, broadcasting: 3\nI0309 00:27:23.151116 2998 log.go:172] (0xc0003c4fd0) Reply frame received for 3\nI0309 00:27:23.151146 2998 log.go:172] (0xc0003c4fd0) (0xc0008c4140) Create stream\nI0309 00:27:23.151157 2998 log.go:172] (0xc0003c4fd0) (0xc0008c4140) Stream added, broadcasting: 5\nI0309 00:27:23.151972 2998 log.go:172] (0xc0003c4fd0) Reply frame received for 5\nI0309 00:27:23.197600 2998 log.go:172] (0xc0003c4fd0) Data frame received for 3\nI0309 00:27:23.197627 2998 log.go:172] (0xc0008c40a0) (3) Data frame handling\nI0309 00:27:23.197638 2998 log.go:172] (0xc0008c40a0) (3) Data frame sent\nI0309 00:27:23.197645 2998 log.go:172] (0xc0003c4fd0) Data frame received for 3\nI0309 00:27:23.197651 2998 log.go:172] (0xc0008c40a0) (3) Data frame handling\nI0309 00:27:23.197695 2998 log.go:172] (0xc0003c4fd0) Data frame received for 5\nI0309 00:27:23.197748 2998 log.go:172] (0xc0008c4140) (5) Data frame handling\nI0309 00:27:23.197774 2998 log.go:172] (0xc0008c4140) (5) Data frame sent\nI0309 00:27:23.197790 2998 log.go:172] (0xc0003c4fd0) Data frame received for 5\nI0309 00:27:23.197799 2998 log.go:172] (0xc0008c4140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0309 00:27:23.199149 2998 log.go:172] (0xc0003c4fd0) Data frame received for 1\nI0309 00:27:23.199171 2998 log.go:172] (0xc0008c4000) (1) Data frame handling\nI0309 00:27:23.199192 2998 log.go:172] (0xc0008c4000) (1) Data frame sent\nI0309 00:27:23.199222 2998 log.go:172] (0xc0003c4fd0) (0xc0008c4000) Stream removed, broadcasting: 1\nI0309 00:27:23.199250 2998 log.go:172] (0xc0003c4fd0) Go away received\nI0309 00:27:23.199576 2998 log.go:172] (0xc0003c4fd0) (0xc0008c4000) Stream removed, broadcasting: 1\nI0309 00:27:23.199596 2998 log.go:172] (0xc0003c4fd0) (0xc0008c40a0) Stream removed, broadcasting: 3\nI0309 00:27:23.199606 2998 log.go:172] (0xc0003c4fd0) (0xc0008c4140) Stream removed, broadcasting: 5\n" Mar 9 00:27:23.202: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 9 00:27:23.202: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 9 00:27:23.206: INFO: Found 1 stateful pods, waiting 
for 3 Mar 9 00:27:33.210: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 9 00:27:33.210: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 9 00:27:33.210: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 9 00:27:33.215: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4496 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 9 00:27:33.388: INFO: stderr: "I0309 00:27:33.328586 3017 log.go:172] (0xc000750210) (0xc0005d5e00) Create stream\nI0309 00:27:33.328641 3017 log.go:172] (0xc000750210) (0xc0005d5e00) Stream added, broadcasting: 1\nI0309 00:27:33.330522 3017 log.go:172] (0xc000750210) Reply frame received for 1\nI0309 00:27:33.330552 3017 log.go:172] (0xc000750210) (0xc00055e780) Create stream\nI0309 00:27:33.330560 3017 log.go:172] (0xc000750210) (0xc00055e780) Stream added, broadcasting: 3\nI0309 00:27:33.331323 3017 log.go:172] (0xc000750210) Reply frame received for 3\nI0309 00:27:33.331356 3017 log.go:172] (0xc000750210) (0xc0005d5ea0) Create stream\nI0309 00:27:33.331368 3017 log.go:172] (0xc000750210) (0xc0005d5ea0) Stream added, broadcasting: 5\nI0309 00:27:33.332265 3017 log.go:172] (0xc000750210) Reply frame received for 5\nI0309 00:27:33.384452 3017 log.go:172] (0xc000750210) Data frame received for 5\nI0309 00:27:33.384485 3017 log.go:172] (0xc0005d5ea0) (5) Data frame handling\nI0309 00:27:33.384499 3017 log.go:172] (0xc0005d5ea0) (5) Data frame sent\nI0309 00:27:33.384510 3017 log.go:172] (0xc000750210) Data frame received for 5\nI0309 00:27:33.384520 3017 log.go:172] (0xc0005d5ea0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0309 00:27:33.384544 3017 log.go:172] (0xc000750210) Data frame received for 3\nI0309 00:27:33.384558 3017 log.go:172] (0xc00055e780) (3) Data frame handling\nI0309 00:27:33.384569 3017 log.go:172] (0xc00055e780) (3) Data frame sent\nI0309 00:27:33.384576 3017 log.go:172] (0xc000750210) Data frame received for 3\nI0309 00:27:33.384582 3017 log.go:172] (0xc00055e780) (3) Data frame handling\nI0309 00:27:33.385761 3017 log.go:172] (0xc000750210) Data frame received for 1\nI0309 00:27:33.385784 3017 log.go:172] (0xc0005d5e00) (1) Data frame handling\nI0309 00:27:33.385799 3017 log.go:172] (0xc0005d5e00) (1) Data frame sent\nI0309 00:27:33.385811 3017 log.go:172] (0xc000750210) (0xc0005d5e00) Stream removed, broadcasting: 1\nI0309 00:27:33.385826 3017 log.go:172] (0xc000750210) Go away received\nI0309 00:27:33.386213 3017 log.go:172] (0xc000750210) (0xc0005d5e00) Stream removed, broadcasting: 1\nI0309 00:27:33.386230 3017 log.go:172] (0xc000750210) (0xc00055e780) Stream removed, broadcasting: 3\nI0309 00:27:33.386239 3017 log.go:172] (0xc000750210) (0xc0005d5ea0) Stream removed, broadcasting: 5\n" Mar 9 00:27:33.389: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 9 00:27:33.389: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 9 00:27:33.389: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4496 ss-1 -- /bin/sh -x -c mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 9 00:27:33.588: INFO: stderr: "I0309 00:27:33.487794 3035 log.go:172] (0xc0009c74a0) (0xc000a4a6e0) Create stream\nI0309 00:27:33.487837 3035 log.go:172] (0xc0009c74a0) (0xc000a4a6e0) Stream added, broadcasting: 1\nI0309 00:27:33.491534 3035 log.go:172] (0xc0009c74a0) Reply frame received for 1\nI0309 00:27:33.491587 3035 log.go:172] (0xc0009c74a0) (0xc0005fe6e0) Create stream\nI0309 00:27:33.491603 3035 log.go:172] (0xc0009c74a0) (0xc0005fe6e0) Stream added, broadcasting: 3\nI0309 00:27:33.492564 3035 log.go:172] (0xc0009c74a0) Reply frame received for 3\nI0309 00:27:33.492590 3035 log.go:172] (0xc0009c74a0) (0xc00077b360) Create stream\nI0309 00:27:33.492598 3035 log.go:172] (0xc0009c74a0) (0xc00077b360) Stream added, broadcasting: 5\nI0309 00:27:33.493266 3035 log.go:172] (0xc0009c74a0) Reply frame received for 5\nI0309 00:27:33.564176 3035 log.go:172] (0xc0009c74a0) Data frame received for 5\nI0309 00:27:33.564199 3035 log.go:172] (0xc00077b360) (5) Data frame handling\nI0309 00:27:33.564207 3035 log.go:172] (0xc00077b360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0309 00:27:33.578170 3035 log.go:172] (0xc0009c74a0) Data frame received for 5\nI0309 00:27:33.578186 3035 log.go:172] (0xc00077b360) (5) Data frame handling\nI0309 00:27:33.578215 3035 log.go:172] (0xc0009c74a0) Data frame received for 3\nI0309 00:27:33.578247 3035 log.go:172] (0xc0005fe6e0) (3) Data frame handling\nI0309 00:27:33.578272 3035 log.go:172] (0xc0005fe6e0) (3) Data frame sent\nI0309 00:27:33.578286 3035 log.go:172] (0xc0009c74a0) Data frame received for 3\nI0309 00:27:33.578304 3035 log.go:172] (0xc0005fe6e0) (3) Data frame handling\nI0309 00:27:33.585446 3035 log.go:172] (0xc0009c74a0) Data frame received for 1\nI0309 00:27:33.585486 3035 log.go:172] (0xc000a4a6e0) (1) Data frame handling\nI0309 00:27:33.585513 3035 log.go:172] (0xc000a4a6e0) (1) Data frame sent\nI0309 00:27:33.585542 3035 log.go:172] (0xc0009c74a0) (0xc000a4a6e0) Stream removed, broadcasting: 1\nI0309 00:27:33.585575 3035 log.go:172] (0xc0009c74a0) Go away received\nI0309 00:27:33.585863 3035 log.go:172] (0xc0009c74a0) (0xc000a4a6e0) Stream removed, broadcasting: 1\nI0309 00:27:33.585882 3035 log.go:172] (0xc0009c74a0) (0xc0005fe6e0) Stream removed, broadcasting: 3\nI0309 00:27:33.585889 3035 log.go:172] (0xc0009c74a0) (0xc00077b360) Stream removed, broadcasting: 5\n" Mar 9 00:27:33.588: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 9 00:27:33.588: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 9 00:27:33.588: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4496 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 9 00:27:33.823: INFO: stderr: "I0309 00:27:33.727637 3055 log.go:172] (0xc0009b3ad0) (0xc000a9a960) Create stream\nI0309 00:27:33.727696 3055 log.go:172] (0xc0009b3ad0) (0xc000a9a960) Stream added, broadcasting: 1\nI0309 00:27:33.731655 3055 log.go:172] (0xc0009b3ad0) Reply frame received for 1\nI0309 00:27:33.731732 3055 log.go:172] (0xc0009b3ad0) (0xc0005f8640) Create stream\nI0309 00:27:33.731755 3055 log.go:172] (0xc0009b3ad0) (0xc0005f8640) Stream added, broadcasting: 3\nI0309 00:27:33.733072 3055 log.go:172] (0xc0009b3ad0) Reply frame received for 3\nI0309 00:27:33.733135 3055 log.go:172] 
(0xc0009b3ad0) (0xc0002e72c0) Create stream\nI0309 00:27:33.733158 3055 log.go:172] (0xc0009b3ad0) (0xc0002e72c0) Stream added, broadcasting: 5\nI0309 00:27:33.734632 3055 log.go:172] (0xc0009b3ad0) Reply frame received for 5\nI0309 00:27:33.794772 3055 log.go:172] (0xc0009b3ad0) Data frame received for 5\nI0309 00:27:33.794798 3055 log.go:172] (0xc0002e72c0) (5) Data frame handling\nI0309 00:27:33.794816 3055 log.go:172] (0xc0002e72c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0309 00:27:33.818086 3055 log.go:172] (0xc0009b3ad0) Data frame received for 5\nI0309 00:27:33.818152 3055 log.go:172] (0xc0002e72c0) (5) Data frame handling\nI0309 00:27:33.818174 3055 log.go:172] (0xc0009b3ad0) Data frame received for 3\nI0309 00:27:33.818187 3055 log.go:172] (0xc0005f8640) (3) Data frame handling\nI0309 00:27:33.818198 3055 log.go:172] (0xc0005f8640) (3) Data frame sent\nI0309 00:27:33.818207 3055 log.go:172] (0xc0009b3ad0) Data frame received for 3\nI0309 00:27:33.818216 3055 log.go:172] (0xc0005f8640) (3) Data frame handling\nI0309 00:27:33.820188 3055 log.go:172] (0xc0009b3ad0) Data frame received for 1\nI0309 00:27:33.820208 3055 log.go:172] (0xc000a9a960) (1) Data frame handling\nI0309 00:27:33.820215 3055 log.go:172] (0xc000a9a960) (1) Data frame sent\nI0309 00:27:33.820228 3055 log.go:172] (0xc0009b3ad0) (0xc000a9a960) Stream removed, broadcasting: 1\nI0309 00:27:33.820263 3055 log.go:172] (0xc0009b3ad0) Go away received\nI0309 00:27:33.820462 3055 log.go:172] (0xc0009b3ad0) (0xc000a9a960) Stream removed, broadcasting: 1\nI0309 00:27:33.820476 3055 log.go:172] (0xc0009b3ad0) (0xc0005f8640) Stream removed, broadcasting: 3\nI0309 00:27:33.820483 3055 log.go:172] (0xc0009b3ad0) (0xc0002e72c0) Stream removed, broadcasting: 5\n" Mar 9 00:27:33.823: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 9 00:27:33.823: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 9 00:27:33.823: INFO: Waiting for statefulset status.replicas updated to 0 Mar 9 00:27:33.834: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Mar 9 00:27:43.842: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 9 00:27:43.842: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 9 00:27:43.842: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 9 00:27:43.872: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999547s Mar 9 00:27:44.883: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.976699543s Mar 9 00:27:45.888: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.965578384s Mar 9 00:27:46.892: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.960986191s Mar 9 00:27:47.897: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.956362893s Mar 9 00:27:48.901: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.951647156s Mar 9 00:27:49.906: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.947491027s Mar 9 00:27:50.910: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.942725856s Mar 9 00:27:51.915: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.938752472s Mar 9 00:27:52.919: INFO: Verifying statefulset ss doesn't scale past 3 for another 933.548568ms STEP: Scaling down stateful 
set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-4496 Mar 9 00:27:53.925: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4496 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 9 00:27:54.256: INFO: stderr: "I0309 00:27:54.187789 3075 log.go:172] (0xc00003a0b0) (0xc000a50000) Create stream\nI0309 00:27:54.187821 3075 log.go:172] (0xc00003a0b0) (0xc000a50000) Stream added, broadcasting: 1\nI0309 00:27:54.191012 3075 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0309 00:27:54.191052 3075 log.go:172] (0xc00003a0b0) (0xc000615b80) Create stream\nI0309 00:27:54.191066 3075 log.go:172] (0xc00003a0b0) (0xc000615b80) Stream added, broadcasting: 3\nI0309 00:27:54.194997 3075 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0309 00:27:54.195022 3075 log.go:172] (0xc00003a0b0) (0xc000615c20) Create stream\nI0309 00:27:54.195030 3075 log.go:172] (0xc00003a0b0) (0xc000615c20) Stream added, broadcasting: 5\nI0309 00:27:54.195953 3075 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0309 00:27:54.252087 3075 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0309 00:27:54.252113 3075 log.go:172] (0xc000615c20) (5) Data frame handling\nI0309 00:27:54.252122 3075 log.go:172] (0xc000615c20) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0309 00:27:54.252137 3075 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0309 00:27:54.252181 3075 log.go:172] (0xc000615b80) (3) Data frame handling\nI0309 00:27:54.252203 3075 log.go:172] (0xc000615b80) (3) Data frame sent\nI0309 00:27:54.252217 3075 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0309 00:27:54.252225 3075 log.go:172] (0xc000615b80) (3) Data frame handling\nI0309 00:27:54.252256 3075 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0309 00:27:54.252273 3075 log.go:172] (0xc000615c20) (5) Data frame handling\nI0309 00:27:54.253516 3075 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0309 00:27:54.253534 3075 log.go:172] (0xc000a50000) (1) Data frame handling\nI0309 00:27:54.253547 3075 log.go:172] (0xc000a50000) (1) Data frame sent\nI0309 00:27:54.253568 3075 log.go:172] (0xc00003a0b0) (0xc000a50000) Stream removed, broadcasting: 1\nI0309 00:27:54.253582 3075 log.go:172] (0xc00003a0b0) Go away received\nI0309 00:27:54.253857 3075 log.go:172] (0xc00003a0b0) (0xc000a50000) Stream removed, broadcasting: 1\nI0309 00:27:54.253869 3075 log.go:172] (0xc00003a0b0) (0xc000615b80) Stream removed, broadcasting: 3\nI0309 00:27:54.253874 3075 log.go:172] (0xc00003a0b0) (0xc000615c20) Stream removed, broadcasting: 5\n" Mar 9 00:27:54.256: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 9 00:27:54.256: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 9 00:27:54.256: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4496 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 9 00:27:54.429: INFO: stderr: "I0309 00:27:54.364921 3097 log.go:172] (0xc00031aa50) (0xc0007f60a0) Create stream\nI0309 00:27:54.364968 3097 log.go:172] (0xc00031aa50) (0xc0007f60a0) Stream added, broadcasting: 1\nI0309 00:27:54.367529 3097 log.go:172] (0xc00031aa50) Reply frame received for 1\nI0309 00:27:54.367565 3097 
log.go:172] (0xc00031aa50) (0xc0006c5a40) Create stream\nI0309 00:27:54.367573 3097 log.go:172] (0xc00031aa50) (0xc0006c5a40) Stream added, broadcasting: 3\nI0309 00:27:54.368388 3097 log.go:172] (0xc00031aa50) Reply frame received for 3\nI0309 00:27:54.368417 3097 log.go:172] (0xc00031aa50) (0xc00058e000) Create stream\nI0309 00:27:54.368429 3097 log.go:172] (0xc00031aa50) (0xc00058e000) Stream added, broadcasting: 5\nI0309 00:27:54.369054 3097 log.go:172] (0xc00031aa50) Reply frame received for 5\nI0309 00:27:54.424909 3097 log.go:172] (0xc00031aa50) Data frame received for 5\nI0309 00:27:54.424935 3097 log.go:172] (0xc00058e000) (5) Data frame handling\nI0309 00:27:54.424953 3097 log.go:172] (0xc00058e000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0309 00:27:54.424962 3097 log.go:172] (0xc00031aa50) Data frame received for 5\nI0309 00:27:54.424969 3097 log.go:172] (0xc00058e000) (5) Data frame handling\nI0309 00:27:54.424982 3097 log.go:172] (0xc00031aa50) Data frame received for 3\nI0309 00:27:54.424987 3097 log.go:172] (0xc0006c5a40) (3) Data frame handling\nI0309 00:27:54.424993 3097 log.go:172] (0xc0006c5a40) (3) Data frame sent\nI0309 00:27:54.425002 3097 log.go:172] (0xc00031aa50) Data frame received for 3\nI0309 00:27:54.425006 3097 log.go:172] (0xc0006c5a40) (3) Data frame handling\nI0309 00:27:54.426310 3097 log.go:172] (0xc00031aa50) Data frame received for 1\nI0309 00:27:54.426340 3097 log.go:172] (0xc0007f60a0) (1) Data frame handling\nI0309 00:27:54.426349 3097 log.go:172] (0xc0007f60a0) (1) Data frame sent\nI0309 00:27:54.426360 3097 log.go:172] (0xc00031aa50) (0xc0007f60a0) Stream removed, broadcasting: 1\nI0309 00:27:54.426372 3097 log.go:172] (0xc00031aa50) Go away received\nI0309 00:27:54.426749 3097 log.go:172] (0xc00031aa50) (0xc0007f60a0) Stream removed, broadcasting: 1\nI0309 00:27:54.426769 3097 log.go:172] (0xc00031aa50) (0xc0006c5a40) Stream removed, broadcasting: 3\nI0309 00:27:54.426778 3097 log.go:172] (0xc00031aa50) (0xc00058e000) Stream removed, broadcasting: 5\n" Mar 9 00:27:54.429: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 9 00:27:54.429: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 9 00:27:54.429: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4496 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 9 00:27:54.638: INFO: stderr: "I0309 00:27:54.549692 3118 log.go:172] (0xc0009f13f0) (0xc0009946e0) Create stream\nI0309 00:27:54.549746 3118 log.go:172] (0xc0009f13f0) (0xc0009946e0) Stream added, broadcasting: 1\nI0309 00:27:54.553352 3118 log.go:172] (0xc0009f13f0) Reply frame received for 1\nI0309 00:27:54.553396 3118 log.go:172] (0xc0009f13f0) (0xc00066a5a0) Create stream\nI0309 00:27:54.553411 3118 log.go:172] (0xc0009f13f0) (0xc00066a5a0) Stream added, broadcasting: 3\nI0309 00:27:54.554350 3118 log.go:172] (0xc0009f13f0) Reply frame received for 3\nI0309 00:27:54.554387 3118 log.go:172] (0xc0009f13f0) (0xc0004c9220) Create stream\nI0309 00:27:54.554400 3118 log.go:172] (0xc0009f13f0) (0xc0004c9220) Stream added, broadcasting: 5\nI0309 00:27:54.555526 3118 log.go:172] (0xc0009f13f0) Reply frame received for 5\nI0309 00:27:54.611895 3118 log.go:172] (0xc0009f13f0) Data frame received for 5\nI0309 00:27:54.611921 3118 log.go:172] (0xc0004c9220) (5) Data frame 
handling\nI0309 00:27:54.611934 3118 log.go:172] (0xc0004c9220) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0309 00:27:54.633218 3118 log.go:172] (0xc0009f13f0) Data frame received for 3\nI0309 00:27:54.633241 3118 log.go:172] (0xc00066a5a0) (3) Data frame handling\nI0309 00:27:54.633272 3118 log.go:172] (0xc00066a5a0) (3) Data frame sent\nI0309 00:27:54.633390 3118 log.go:172] (0xc0009f13f0) Data frame received for 3\nI0309 00:27:54.633404 3118 log.go:172] (0xc00066a5a0) (3) Data frame handling\nI0309 00:27:54.633423 3118 log.go:172] (0xc0009f13f0) Data frame received for 5\nI0309 00:27:54.633441 3118 log.go:172] (0xc0004c9220) (5) Data frame handling\nI0309 00:27:54.635100 3118 log.go:172] (0xc0009f13f0) Data frame received for 1\nI0309 00:27:54.635127 3118 log.go:172] (0xc0009946e0) (1) Data frame handling\nI0309 00:27:54.635141 3118 log.go:172] (0xc0009946e0) (1) Data frame sent\nI0309 00:27:54.635151 3118 log.go:172] (0xc0009f13f0) (0xc0009946e0) Stream removed, broadcasting: 1\nI0309 00:27:54.635164 3118 log.go:172] (0xc0009f13f0) Go away received\nI0309 00:27:54.635458 3118 log.go:172] (0xc0009f13f0) (0xc0009946e0) Stream removed, broadcasting: 1\nI0309 00:27:54.635474 3118 log.go:172] (0xc0009f13f0) (0xc00066a5a0) Stream removed, broadcasting: 3\nI0309 00:27:54.635480 3118 log.go:172] (0xc0009f13f0) (0xc0004c9220) Stream removed, broadcasting: 5\n" Mar 9 00:27:54.638: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 9 00:27:54.638: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 9 00:27:54.638: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 9 00:28:04.665: INFO: Deleting all statefulset in ns statefulset-4496 Mar 9 00:28:04.668: INFO: Scaling statefulset ss to 0 Mar 9 00:28:04.675: INFO: Waiting for statefulset status.replicas updated to 0 Mar 9 00:28:04.677: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:28:04.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4496" for this suite. 
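The repeated mv execs above are how the suite toggles readiness: the ss pods run httpd with an HTTP readiness probe against index.html, so hiding the file makes a pod report Ready=false, and the StatefulSet controller (default OrderedReady pod management) halts scaling until readiness returns. A rough hand-run sketch of the halt, with an illustrative namespace:

# Break readiness on the lowest ordinal; the probe starts failing.
kubectl exec -n demo-ns ss-0 -- sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
# With ss-0 unready, scale-up stalls: ss-1 is not created until its
# predecessor is Ready again (OrderedReady semantics).
kubectl scale statefulset/ss -n demo-ns --replicas=3
kubectl get pods -n demo-ns -w
# Restore the file; readiness returns and ss-1, ss-2 come up in order.
kubectl exec -n demo-ns ss-0 -- sh -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'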
• [SLOW TEST:72.135 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":280,"completed":229,"skipped":3739,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:28:04.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:28:06.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4138" for this suite. 
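The "should not conflict" spec mounts two atomic-writer volumes (a secret and a configMap, each of which the kubelet wraps in an emptyDir) side by side in one pod and checks both stay readable. A minimal hand-run equivalent, with illustrative names:

kubectl create secret generic wrapper-secret --from-literal=data-1=value-1
kubectl create configmap wrapper-configmap --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata: {name: wrapper-pod}
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/secret/data-1 /etc/cm/data-1"]
    volumeMounts:
    - {name: sec, mountPath: /etc/secret}
    - {name: cm,  mountPath: /etc/cm}
  volumes:
  - {name: sec, secret: {secretName: wrapper-secret}}
  - {name: cm,  configMap: {name: wrapper-configmap}}
EOF
kubectl logs wrapper-pod         # both values readable -> the wrappers did not conflict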
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":280,"completed":230,"skipped":3756,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:28:06.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 9 00:28:07.433: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 9 00:28:09.444: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719310487, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719310487, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719310487, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719310487, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 9 00:28:12.471: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Mar 9 00:28:14.546: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config attach --namespace=webhook-2827 to-be-attached-pod -i -c=container1' Mar 9 00:28:14.673: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:28:14.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2827" for this suite. STEP: Destroying namespace "webhook-2827-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:7.868 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":280,"completed":231,"skipped":3832,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:28:14.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1466 STEP: creating an pod Mar 9 00:28:14.867: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-6747 -- logs-generator --log-lines-total 100 --run-duration 20s' Mar 9 00:28:14.952: INFO: stderr: "" Mar 9 00:28:14.952: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Waiting for log generator to start. Mar 9 00:28:14.952: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Mar 9 00:28:14.952: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-6747" to be "running and ready, or succeeded" Mar 9 00:28:14.959: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 7.71351ms Mar 9 00:28:16.963: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011235657s Mar 9 00:28:18.966: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.014347808s Mar 9 00:28:18.966: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Mar 9 00:28:18.966: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings Mar 9 00:28:18.966: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6747' Mar 9 00:28:19.078: INFO: stderr: "" Mar 9 00:28:19.078: INFO: stdout: "I0309 00:28:16.109312 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/cwv 470\nI0309 00:28:16.309430 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/rdv 520\nI0309 00:28:16.509549 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/9vg 338\nI0309 00:28:16.709484 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/bndt 352\nI0309 00:28:16.909530 1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/2wr9 537\nI0309 00:28:17.109561 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/5qv 445\nI0309 00:28:17.309485 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/662 477\nI0309 00:28:17.509536 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/9hnr 270\nI0309 00:28:17.709515 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/pbxx 328\nI0309 00:28:17.909527 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/z5bd 485\nI0309 00:28:18.109497 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/7glf 426\nI0309 00:28:18.309487 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/6d4d 528\nI0309 00:28:18.509447 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/tvj 362\nI0309 00:28:18.709456 1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/5lbd 431\nI0309 00:28:18.909471 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/lw56 209\n" STEP: limiting log lines Mar 9 00:28:19.078: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6747 --tail=1' Mar 9 00:28:19.152: INFO: stderr: "" Mar 9 00:28:19.152: INFO: stdout: "I0309 00:28:19.109486 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/cpdz 535\n" Mar 9 00:28:19.152: INFO: got output "I0309 00:28:19.109486 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/cpdz 535\n" STEP: limiting log bytes Mar 9 00:28:19.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6747 --limit-bytes=1' Mar 9 00:28:19.226: INFO: stderr: "" Mar 9 00:28:19.226: INFO: stdout: "I" Mar 9 00:28:19.226: INFO: got output "I" STEP: exposing timestamps Mar 9 00:28:19.226: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6747 --tail=1 --timestamps' Mar 9 00:28:19.302: INFO: stderr: "" Mar 9 00:28:19.302: INFO: stdout: "2020-03-09T00:28:19.109609128Z I0309 00:28:19.109486 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/cpdz 535\n" Mar 9 00:28:19.302: INFO: got output "2020-03-09T00:28:19.109609128Z I0309 00:28:19.109486 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/cpdz 535\n" STEP: restricting to a time range Mar 9 00:28:21.802: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6747 --since=1s' Mar 9 00:28:21.942: INFO: stderr: "" Mar 9 00:28:21.942: INFO: stdout: "I0309 00:28:21.109532 1 logs_generator.go:76] 
25 GET /api/v1/namespaces/ns/pods/dtd8 361\nI0309 00:28:21.309508 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/default/pods/trt 443\nI0309 00:28:21.509536 1 logs_generator.go:76] 27 POST /api/v1/namespaces/default/pods/d6tx 324\nI0309 00:28:21.709524 1 logs_generator.go:76] 28 GET /api/v1/namespaces/default/pods/mft 385\nI0309 00:28:21.909510 1 logs_generator.go:76] 29 GET /api/v1/namespaces/default/pods/vjvf 417\n" Mar 9 00:28:21.942: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6747 --since=24h' Mar 9 00:28:22.062: INFO: stderr: "" Mar 9 00:28:22.062: INFO: stdout: "I0309 00:28:16.109312 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/cwv 470\nI0309 00:28:16.309430 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/rdv 520\nI0309 00:28:16.509549 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/9vg 338\nI0309 00:28:16.709484 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/bndt 352\nI0309 00:28:16.909530 1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/2wr9 537\nI0309 00:28:17.109561 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/5qv 445\nI0309 00:28:17.309485 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/662 477\nI0309 00:28:17.509536 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/9hnr 270\nI0309 00:28:17.709515 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/pbxx 328\nI0309 00:28:17.909527 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/z5bd 485\nI0309 00:28:18.109497 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/7glf 426\nI0309 00:28:18.309487 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/6d4d 528\nI0309 00:28:18.509447 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/tvj 362\nI0309 00:28:18.709456 1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/5lbd 431\nI0309 00:28:18.909471 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/lw56 209\nI0309 00:28:19.109486 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/cpdz 535\nI0309 00:28:19.309470 1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/lzvr 413\nI0309 00:28:19.509506 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/rdk5 249\nI0309 00:28:19.709501 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/w7ns 267\nI0309 00:28:19.909557 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/s58r 241\nI0309 00:28:20.109525 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/4q4l 251\nI0309 00:28:20.309468 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/ns/pods/6q78 247\nI0309 00:28:20.509498 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/ddc 249\nI0309 00:28:20.709557 1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/dj9 221\nI0309 00:28:20.909483 1 logs_generator.go:76] 24 GET /api/v1/namespaces/kube-system/pods/dxj 316\nI0309 00:28:21.109532 1 logs_generator.go:76] 25 GET /api/v1/namespaces/ns/pods/dtd8 361\nI0309 00:28:21.309508 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/default/pods/trt 443\nI0309 00:28:21.509536 1 logs_generator.go:76] 27 POST /api/v1/namespaces/default/pods/d6tx 324\nI0309 00:28:21.709524 1 logs_generator.go:76] 28 GET /api/v1/namespaces/default/pods/mft 385\nI0309 00:28:21.909510 1 logs_generator.go:76] 29 GET /api/v1/namespaces/default/pods/vjvf 417\n" [AfterEach] Kubectl logs 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1472 Mar 9 00:28:22.063: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-6747' Mar 9 00:28:32.595: INFO: stderr: "" Mar 9 00:28:32.595: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:28:32.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6747" for this suite. • [SLOW TEST:17.884 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1462 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":280,"completed":232,"skipped":3856,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:28:32.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-map-0def63b1-d546-4678-87c1-9979dd051ffd STEP: Creating a pod to test consume configMaps Mar 9 00:28:32.708: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1daf23f6-969b-44c7-94fe-e8616aeca19c" in namespace "projected-1949" to be "success or failure" Mar 9 00:28:32.717: INFO: Pod "pod-projected-configmaps-1daf23f6-969b-44c7-94fe-e8616aeca19c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.362032ms Mar 9 00:28:34.721: INFO: Pod "pod-projected-configmaps-1daf23f6-969b-44c7-94fe-e8616aeca19c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.013397056s STEP: Saw pod success Mar 9 00:28:34.721: INFO: Pod "pod-projected-configmaps-1daf23f6-969b-44c7-94fe-e8616aeca19c" satisfied condition "success or failure" Mar 9 00:28:34.724: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-1daf23f6-969b-44c7-94fe-e8616aeca19c container projected-configmap-volume-test: STEP: delete the pod Mar 9 00:28:34.759: INFO: Waiting for pod pod-projected-configmaps-1daf23f6-969b-44c7-94fe-e8616aeca19c to disappear Mar 9 00:28:34.765: INFO: Pod pod-projected-configmaps-1daf23f6-969b-44c7-94fe-e8616aeca19c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:28:34.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1949" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":280,"completed":233,"skipped":3862,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:28:34.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test substitution in container's args Mar 9 00:28:34.849: INFO: Waiting up to 5m0s for pod "var-expansion-6f6598a5-6a6d-4d99-aa66-dba6e073ec35" in namespace "var-expansion-946" to be "success or failure" Mar 9 00:28:34.854: INFO: Pod "var-expansion-6f6598a5-6a6d-4d99-aa66-dba6e073ec35": Phase="Pending", Reason="", readiness=false. Elapsed: 5.294246ms Mar 9 00:28:36.859: INFO: Pod "var-expansion-6f6598a5-6a6d-4d99-aa66-dba6e073ec35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00940823s STEP: Saw pod success Mar 9 00:28:36.859: INFO: Pod "var-expansion-6f6598a5-6a6d-4d99-aa66-dba6e073ec35" satisfied condition "success or failure" Mar 9 00:28:36.861: INFO: Trying to get logs from node latest-worker pod var-expansion-6f6598a5-6a6d-4d99-aa66-dba6e073ec35 container dapi-container: STEP: delete the pod Mar 9 00:28:36.884: INFO: Waiting for pod var-expansion-6f6598a5-6a6d-4d99-aa66-dba6e073ec35 to disappear Mar 9 00:28:36.887: INFO: Pod var-expansion-6f6598a5-6a6d-4d99-aa66-dba6e073ec35 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:28:36.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-946" for this suite. 
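What this test exercises is the expansion of $(VAR_NAME) references in a container's command and args from the container's environment. The hypothetical expand helper below is a simplified, stdlib-only model of that rule — the real implementation handles edge cases this sketch ignores:

package main

import (
	"fmt"
	"strings"
)

// expand is a simplified model of how Kubernetes expands $(VAR) references
// in container command/args: $(NAME) is replaced when NAME is a defined env
// var, $$ escapes a literal $, and unresolved references are left verbatim.
func expand(input string, env map[string]string) string {
	var out strings.Builder
	for i := 0; i < len(input); i++ {
		if input[i] != '$' {
			out.WriteByte(input[i])
			continue
		}
		// "$$" -> literal "$"
		if i+1 < len(input) && input[i+1] == '$' {
			out.WriteByte('$')
			i++
			continue
		}
		// "$(NAME)" -> lookup in env
		if i+1 < len(input) && input[i+1] == '(' {
			if end := strings.IndexByte(input[i+2:], ')'); end >= 0 {
				name := input[i+2 : i+2+end]
				if v, ok := env[name]; ok {
					out.WriteString(v)
					i += 2 + end
					continue
				}
			}
		}
		out.WriteByte('$') // lone or unresolvable $ passes through
	}
	return out.String()
}

func main() {
	env := map[string]string{"POD_NAME": "dapi-test-pod"}
	fmt.Println(expand("test-value=$(POD_NAME) cost=$$5 missing=$(NOPE)", env))
	// Output: test-value=dapi-test-pod cost=$5 missing=$(NOPE)
}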
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":280,"completed":234,"skipped":3881,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:28:36.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 9 00:28:37.001: INFO: Waiting up to 5m0s for pod "downwardapi-volume-256f0dbf-c01b-4ed9-9b95-1f592903ffd2" in namespace "downward-api-425" to be "success or failure" Mar 9 00:28:37.030: INFO: Pod "downwardapi-volume-256f0dbf-c01b-4ed9-9b95-1f592903ffd2": Phase="Pending", Reason="", readiness=false. Elapsed: 29.525532ms Mar 9 00:28:39.034: INFO: Pod "downwardapi-volume-256f0dbf-c01b-4ed9-9b95-1f592903ffd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032994394s Mar 9 00:28:41.038: INFO: Pod "downwardapi-volume-256f0dbf-c01b-4ed9-9b95-1f592903ffd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037344878s STEP: Saw pod success Mar 9 00:28:41.038: INFO: Pod "downwardapi-volume-256f0dbf-c01b-4ed9-9b95-1f592903ffd2" satisfied condition "success or failure" Mar 9 00:28:41.042: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-256f0dbf-c01b-4ed9-9b95-1f592903ffd2 container client-container: STEP: delete the pod Mar 9 00:28:41.057: INFO: Waiting for pod downwardapi-volume-256f0dbf-c01b-4ed9-9b95-1f592903ffd2 to disappear Mar 9 00:28:41.062: INFO: Pod downwardapi-volume-256f0dbf-c01b-4ed9-9b95-1f592903ffd2 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:28:41.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-425" for this suite. 
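The assertion in the test above is on the per-file Mode of a downward API volume item. A sketch of such a volume definition, assuming the k8s.io/api/core/v1 types; the "podname" path and 0400 mode are illustrative rather than the test's exact values:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // illustrative per-item file mode
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "podname",
					FieldRef: &corev1.ObjectFieldSelector{
						FieldPath: "metadata.name",
					},
					// Mode overrides DefaultMode for this one file; this is
					// the field the "set mode on item file" test checks.
					Mode: &mode,
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}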
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":235,"skipped":3897,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:28:41.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 9 00:28:41.153: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-125 I0309 00:28:41.179066 7 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-125, replica count: 1 I0309 00:28:42.229436 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0309 00:28:43.229622 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 9 00:28:43.379: INFO: Created: latency-svc-bd6wf Mar 9 00:28:43.412: INFO: Got endpoints: latency-svc-bd6wf [82.438181ms] Mar 9 00:28:43.438: INFO: Created: latency-svc-bfp9g Mar 9 00:28:43.456: INFO: Got endpoints: latency-svc-bfp9g [44.388791ms] Mar 9 00:28:43.457: INFO: Created: latency-svc-vwtc5 Mar 9 00:28:43.474: INFO: Got endpoints: latency-svc-vwtc5 [62.589113ms] Mar 9 00:28:43.475: INFO: Created: latency-svc-kh76b Mar 9 00:28:43.499: INFO: Created: latency-svc-wjsvw Mar 9 00:28:43.499: INFO: Got endpoints: latency-svc-kh76b [87.319755ms] Mar 9 00:28:43.542: INFO: Got endpoints: latency-svc-wjsvw [129.687874ms] Mar 9 00:28:43.553: INFO: Created: latency-svc-vhnl5 Mar 9 00:28:43.577: INFO: Got endpoints: latency-svc-vhnl5 [164.928412ms] Mar 9 00:28:43.579: INFO: Created: latency-svc-j5xr6 Mar 9 00:28:43.600: INFO: Got endpoints: latency-svc-j5xr6 [188.177197ms] Mar 9 00:28:43.601: INFO: Created: latency-svc-c5q45 Mar 9 00:28:43.616: INFO: Got endpoints: latency-svc-c5q45 [203.637875ms] Mar 9 00:28:43.668: INFO: Created: latency-svc-f6rs6 Mar 9 00:28:43.678: INFO: Got endpoints: latency-svc-f6rs6 [266.066597ms] Mar 9 00:28:43.696: INFO: Created: latency-svc-m4mv5 Mar 9 00:28:43.706: INFO: Got endpoints: latency-svc-m4mv5 [293.562108ms] Mar 9 00:28:43.720: INFO: Created: latency-svc-zdzsv Mar 9 00:28:43.730: INFO: Got endpoints: latency-svc-zdzsv [317.361485ms] Mar 9 00:28:43.751: INFO: Created: latency-svc-jptwg Mar 9 00:28:43.757: INFO: Got endpoints: latency-svc-jptwg [344.905996ms] Mar 9 00:28:43.794: INFO: Created: latency-svc-6qd5x Mar 9 00:28:43.810: INFO: Created: latency-svc-wc7lq Mar 9 00:28:43.811: INFO: Got endpoints: latency-svc-6qd5x [398.247632ms] Mar 9 00:28:43.817: INFO: Got endpoints: latency-svc-wc7lq [404.391133ms] Mar 9 00:28:43.834: INFO: Created: latency-svc-bc7rb Mar 9 00:28:43.847: INFO: Got endpoints: latency-svc-bc7rb [435.042581ms] Mar 9 00:28:43.864: INFO: Created: latency-svc-5pt5v Mar 9 
00:28:43.876: INFO: Got endpoints: latency-svc-5pt5v [464.514943ms] Mar 9 00:28:43.889: INFO: Created: latency-svc-lnmxh Mar 9 00:28:43.919: INFO: Got endpoints: latency-svc-lnmxh [462.952581ms] Mar 9 00:28:43.931: INFO: Created: latency-svc-hx7fs Mar 9 00:28:43.938: INFO: Got endpoints: latency-svc-hx7fs [463.039685ms] Mar 9 00:28:43.961: INFO: Created: latency-svc-q669n Mar 9 00:28:43.979: INFO: Created: latency-svc-4tq6z Mar 9 00:28:43.979: INFO: Got endpoints: latency-svc-q669n [479.739755ms] Mar 9 00:28:43.997: INFO: Got endpoints: latency-svc-4tq6z [454.759537ms] Mar 9 00:28:44.046: INFO: Created: latency-svc-wp2ff Mar 9 00:28:44.068: INFO: Got endpoints: latency-svc-wp2ff [491.121357ms] Mar 9 00:28:44.068: INFO: Created: latency-svc-f97jp Mar 9 00:28:44.077: INFO: Got endpoints: latency-svc-f97jp [477.153368ms] Mar 9 00:28:44.105: INFO: Created: latency-svc-72d8z Mar 9 00:28:44.113: INFO: Got endpoints: latency-svc-72d8z [497.126174ms] Mar 9 00:28:44.129: INFO: Created: latency-svc-92l58 Mar 9 00:28:44.137: INFO: Got endpoints: latency-svc-92l58 [458.636379ms] Mar 9 00:28:44.170: INFO: Created: latency-svc-dw4ll Mar 9 00:28:44.194: INFO: Created: latency-svc-r85mm Mar 9 00:28:44.195: INFO: Got endpoints: latency-svc-dw4ll [489.183523ms] Mar 9 00:28:44.206: INFO: Got endpoints: latency-svc-r85mm [476.303053ms] Mar 9 00:28:44.218: INFO: Created: latency-svc-rxhmj Mar 9 00:28:44.248: INFO: Created: latency-svc-zd96w Mar 9 00:28:44.249: INFO: Got endpoints: latency-svc-rxhmj [491.418841ms] Mar 9 00:28:44.267: INFO: Got endpoints: latency-svc-zd96w [456.444211ms] Mar 9 00:28:44.267: INFO: Created: latency-svc-5xkgg Mar 9 00:28:44.308: INFO: Got endpoints: latency-svc-5xkgg [491.260979ms] Mar 9 00:28:44.309: INFO: Created: latency-svc-dqwn7 Mar 9 00:28:44.315: INFO: Got endpoints: latency-svc-dqwn7 [467.641773ms] Mar 9 00:28:44.335: INFO: Created: latency-svc-z4s2h Mar 9 00:28:44.345: INFO: Got endpoints: latency-svc-z4s2h [468.391974ms] Mar 9 00:28:44.369: INFO: Created: latency-svc-zvxq7 Mar 9 00:28:44.375: INFO: Got endpoints: latency-svc-zvxq7 [455.862622ms] Mar 9 00:28:44.446: INFO: Created: latency-svc-g29b8 Mar 9 00:28:44.471: INFO: Got endpoints: latency-svc-g29b8 [533.141976ms] Mar 9 00:28:44.492: INFO: Created: latency-svc-6c8bg Mar 9 00:28:44.513: INFO: Got endpoints: latency-svc-6c8bg [533.339657ms] Mar 9 00:28:44.514: INFO: Created: latency-svc-gwcpj Mar 9 00:28:44.523: INFO: Got endpoints: latency-svc-gwcpj [526.369191ms] Mar 9 00:28:44.543: INFO: Created: latency-svc-jqsqj Mar 9 00:28:44.572: INFO: Got endpoints: latency-svc-jqsqj [503.387742ms] Mar 9 00:28:44.609: INFO: Created: latency-svc-7vj7g Mar 9 00:28:44.622: INFO: Got endpoints: latency-svc-7vj7g [544.909108ms] Mar 9 00:28:44.640: INFO: Created: latency-svc-d64dq Mar 9 00:28:44.646: INFO: Got endpoints: latency-svc-d64dq [533.166456ms] Mar 9 00:28:44.663: INFO: Created: latency-svc-xmqft Mar 9 00:28:44.698: INFO: Got endpoints: latency-svc-xmqft [560.748049ms] Mar 9 00:28:44.698: INFO: Created: latency-svc-g2n8j Mar 9 00:28:44.704: INFO: Got endpoints: latency-svc-g2n8j [509.115219ms] Mar 9 00:28:44.723: INFO: Created: latency-svc-97bsf Mar 9 00:28:44.729: INFO: Got endpoints: latency-svc-97bsf [522.978517ms] Mar 9 00:28:44.759: INFO: Created: latency-svc-4v9ch Mar 9 00:28:44.770: INFO: Got endpoints: latency-svc-4v9ch [521.540242ms] Mar 9 00:28:44.789: INFO: Created: latency-svc-h7v4d Mar 9 00:28:44.817: INFO: Got endpoints: latency-svc-h7v4d [549.901711ms] Mar 9 00:28:44.832: INFO: Created: latency-svc-pwsqh Mar 
9 00:28:44.836: INFO: Got endpoints: latency-svc-pwsqh [527.956475ms] Mar 9 00:28:44.867: INFO: Created: latency-svc-7xld2 Mar 9 00:28:44.873: INFO: Got endpoints: latency-svc-7xld2 [557.805903ms] Mar 9 00:28:44.892: INFO: Created: latency-svc-25gh5 Mar 9 00:28:44.910: INFO: Got endpoints: latency-svc-25gh5 [564.653566ms] Mar 9 00:28:44.955: INFO: Created: latency-svc-rsnrk Mar 9 00:28:44.975: INFO: Created: latency-svc-9hf9f Mar 9 00:28:44.975: INFO: Got endpoints: latency-svc-rsnrk [600.127444ms] Mar 9 00:28:44.993: INFO: Got endpoints: latency-svc-9hf9f [522.523501ms] Mar 9 00:28:44.994: INFO: Created: latency-svc-ggm8r Mar 9 00:28:44.995: INFO: Got endpoints: latency-svc-ggm8r [482.86322ms] Mar 9 00:28:45.023: INFO: Created: latency-svc-psqjf Mar 9 00:28:45.031: INFO: Got endpoints: latency-svc-psqjf [508.34749ms] Mar 9 00:28:45.046: INFO: Created: latency-svc-vxtbb Mar 9 00:28:45.054: INFO: Got endpoints: latency-svc-vxtbb [481.838992ms] Mar 9 00:28:45.088: INFO: Created: latency-svc-b65kg Mar 9 00:28:45.096: INFO: Got endpoints: latency-svc-b65kg [473.652577ms] Mar 9 00:28:45.126: INFO: Created: latency-svc-76nnb Mar 9 00:28:45.130: INFO: Got endpoints: latency-svc-76nnb [484.013011ms] Mar 9 00:28:45.149: INFO: Created: latency-svc-kcnjc Mar 9 00:28:45.154: INFO: Got endpoints: latency-svc-kcnjc [455.984029ms] Mar 9 00:28:45.173: INFO: Created: latency-svc-75wlj Mar 9 00:28:45.178: INFO: Got endpoints: latency-svc-75wlj [473.378638ms] Mar 9 00:28:45.224: INFO: Created: latency-svc-mkh74 Mar 9 00:28:45.244: INFO: Created: latency-svc-nkcx7 Mar 9 00:28:45.245: INFO: Got endpoints: latency-svc-mkh74 [515.665759ms] Mar 9 00:28:45.256: INFO: Got endpoints: latency-svc-nkcx7 [485.469018ms] Mar 9 00:28:45.274: INFO: Created: latency-svc-fzphv Mar 9 00:28:45.286: INFO: Got endpoints: latency-svc-fzphv [468.450289ms] Mar 9 00:28:45.306: INFO: Created: latency-svc-jh7t5 Mar 9 00:28:45.323: INFO: Got endpoints: latency-svc-jh7t5 [487.180084ms] Mar 9 00:28:45.374: INFO: Created: latency-svc-fmr7s Mar 9 00:28:45.377: INFO: Got endpoints: latency-svc-fmr7s [504.541996ms] Mar 9 00:28:45.395: INFO: Created: latency-svc-xf78n Mar 9 00:28:45.401: INFO: Got endpoints: latency-svc-xf78n [491.561827ms] Mar 9 00:28:45.418: INFO: Created: latency-svc-xnsx7 Mar 9 00:28:45.425: INFO: Got endpoints: latency-svc-xnsx7 [449.537029ms] Mar 9 00:28:45.448: INFO: Created: latency-svc-jfqq4 Mar 9 00:28:45.461: INFO: Got endpoints: latency-svc-jfqq4 [467.839938ms] Mar 9 00:28:45.499: INFO: Created: latency-svc-v9bsb Mar 9 00:28:45.521: INFO: Created: latency-svc-wlzwx Mar 9 00:28:45.522: INFO: Got endpoints: latency-svc-v9bsb [526.458591ms] Mar 9 00:28:45.539: INFO: Got endpoints: latency-svc-wlzwx [507.60894ms] Mar 9 00:28:45.557: INFO: Created: latency-svc-m6tjq Mar 9 00:28:45.563: INFO: Got endpoints: latency-svc-m6tjq [509.527983ms] Mar 9 00:28:45.581: INFO: Created: latency-svc-wn9lv Mar 9 00:28:45.585: INFO: Got endpoints: latency-svc-wn9lv [489.153518ms] Mar 9 00:28:45.599: INFO: Created: latency-svc-vqlq6 Mar 9 00:28:45.613: INFO: Got endpoints: latency-svc-vqlq6 [482.982864ms] Mar 9 00:28:45.629: INFO: Created: latency-svc-6dsmr Mar 9 00:28:45.633: INFO: Got endpoints: latency-svc-6dsmr [479.266588ms] Mar 9 00:28:45.653: INFO: Created: latency-svc-h9vnp Mar 9 00:28:45.657: INFO: Got endpoints: latency-svc-h9vnp [479.322585ms] Mar 9 00:28:45.677: INFO: Created: latency-svc-rp4bh Mar 9 00:28:45.681: INFO: Got endpoints: latency-svc-rp4bh [436.349379ms] Mar 9 00:28:45.701: INFO: Created: latency-svc-ctnp9 Mar 9 
00:28:45.705: INFO: Got endpoints: latency-svc-ctnp9 [449.316662ms] Mar 9 00:28:45.739: INFO: Created: latency-svc-hb6ld Mar 9 00:28:45.760: INFO: Got endpoints: latency-svc-hb6ld [474.663278ms] Mar 9 00:28:45.761: INFO: Created: latency-svc-7tcht Mar 9 00:28:45.778: INFO: Got endpoints: latency-svc-7tcht [455.001687ms] Mar 9 00:28:45.796: INFO: Created: latency-svc-9t7qt Mar 9 00:28:45.803: INFO: Got endpoints: latency-svc-9t7qt [425.268944ms] Mar 9 00:28:45.822: INFO: Created: latency-svc-lrf4f Mar 9 00:28:45.827: INFO: Got endpoints: latency-svc-lrf4f [425.357342ms] Mar 9 00:28:45.871: INFO: Created: latency-svc-r2c7l Mar 9 00:28:45.893: INFO: Created: latency-svc-w8q5r Mar 9 00:28:45.894: INFO: Got endpoints: latency-svc-r2c7l [468.528041ms] Mar 9 00:28:45.899: INFO: Got endpoints: latency-svc-w8q5r [437.414225ms] Mar 9 00:28:45.917: INFO: Created: latency-svc-gptvn Mar 9 00:28:45.923: INFO: Got endpoints: latency-svc-gptvn [400.68148ms] Mar 9 00:28:45.941: INFO: Created: latency-svc-frkvd Mar 9 00:28:45.971: INFO: Got endpoints: latency-svc-frkvd [431.462469ms] Mar 9 00:28:46.003: INFO: Created: latency-svc-qtrq8 Mar 9 00:28:46.019: INFO: Created: latency-svc-k2z7c Mar 9 00:28:46.019: INFO: Got endpoints: latency-svc-qtrq8 [455.494899ms] Mar 9 00:28:46.022: INFO: Got endpoints: latency-svc-k2z7c [437.218829ms] Mar 9 00:28:46.055: INFO: Created: latency-svc-npgrk Mar 9 00:28:46.079: INFO: Got endpoints: latency-svc-npgrk [465.745102ms] Mar 9 00:28:46.116: INFO: Created: latency-svc-qzwgd Mar 9 00:28:46.151: INFO: Got endpoints: latency-svc-qzwgd [517.981868ms] Mar 9 00:28:46.151: INFO: Created: latency-svc-xjhg5 Mar 9 00:28:46.161: INFO: Got endpoints: latency-svc-xjhg5 [503.614564ms] Mar 9 00:28:46.181: INFO: Created: latency-svc-jmzvf Mar 9 00:28:46.199: INFO: Created: latency-svc-dd5h5 Mar 9 00:28:46.199: INFO: Got endpoints: latency-svc-jmzvf [518.156543ms] Mar 9 00:28:46.242: INFO: Created: latency-svc-w5ndv Mar 9 00:28:46.242: INFO: Got endpoints: latency-svc-dd5h5 [536.953226ms] Mar 9 00:28:46.265: INFO: Created: latency-svc-6wcxb Mar 9 00:28:46.301: INFO: Created: latency-svc-gk6rf Mar 9 00:28:46.301: INFO: Got endpoints: latency-svc-w5ndv [540.721154ms] Mar 9 00:28:46.319: INFO: Created: latency-svc-st7jj Mar 9 00:28:46.362: INFO: Created: latency-svc-hcxt9 Mar 9 00:28:46.362: INFO: Got endpoints: latency-svc-6wcxb [583.566461ms] Mar 9 00:28:46.378: INFO: Created: latency-svc-vdgnm Mar 9 00:28:46.409: INFO: Created: latency-svc-2h8hw Mar 9 00:28:46.409: INFO: Got endpoints: latency-svc-gk6rf [606.293017ms] Mar 9 00:28:46.433: INFO: Created: latency-svc-h29f9 Mar 9 00:28:46.457: INFO: Created: latency-svc-kvncj Mar 9 00:28:46.457: INFO: Got endpoints: latency-svc-st7jj [630.801355ms] Mar 9 00:28:46.494: INFO: Got endpoints: latency-svc-hcxt9 [600.011212ms] Mar 9 00:28:46.494: INFO: Created: latency-svc-jqsjq Mar 9 00:28:46.547: INFO: Created: latency-svc-9xdfh Mar 9 00:28:46.547: INFO: Got endpoints: latency-svc-vdgnm [648.358756ms] Mar 9 00:28:46.613: INFO: Created: latency-svc-r7mlv Mar 9 00:28:46.613: INFO: Got endpoints: latency-svc-2h8hw [690.60456ms] Mar 9 00:28:46.630: INFO: Created: latency-svc-zdjzg Mar 9 00:28:46.661: INFO: Got endpoints: latency-svc-h29f9 [690.299601ms] Mar 9 00:28:46.662: INFO: Created: latency-svc-sbrmh Mar 9 00:28:46.697: INFO: Got endpoints: latency-svc-kvncj [678.570871ms] Mar 9 00:28:46.697: INFO: Created: latency-svc-jpppt Mar 9 00:28:46.757: INFO: Got endpoints: latency-svc-jqsjq [734.739573ms] Mar 9 00:28:46.762: INFO: Created: 
latency-svc-zlv49 Mar 9 00:28:46.792: INFO: Got endpoints: latency-svc-9xdfh [712.57094ms] Mar 9 00:28:46.810: INFO: Created: latency-svc-f8mlz Mar 9 00:28:46.853: INFO: Created: latency-svc-qr8wj Mar 9 00:28:46.853: INFO: Got endpoints: latency-svc-r7mlv [701.573757ms] Mar 9 00:28:46.884: INFO: Created: latency-svc-6crgd Mar 9 00:28:46.906: INFO: Got endpoints: latency-svc-zdjzg [745.321406ms] Mar 9 00:28:46.931: INFO: Created: latency-svc-jlqnc Mar 9 00:28:46.949: INFO: Got endpoints: latency-svc-sbrmh [749.667644ms] Mar 9 00:28:46.949: INFO: Created: latency-svc-pr9gn Mar 9 00:28:47.009: INFO: Got endpoints: latency-svc-jpppt [766.92528ms] Mar 9 00:28:47.009: INFO: Created: latency-svc-ld99c Mar 9 00:28:47.026: INFO: Created: latency-svc-kt5gd Mar 9 00:28:47.044: INFO: Got endpoints: latency-svc-zlv49 [743.269454ms] Mar 9 00:28:47.045: INFO: Created: latency-svc-457fp Mar 9 00:28:47.087: INFO: Created: latency-svc-2kkp7 Mar 9 00:28:47.096: INFO: Got endpoints: latency-svc-f8mlz [733.461564ms] Mar 9 00:28:47.134: INFO: Created: latency-svc-tsc57 Mar 9 00:28:47.171: INFO: Created: latency-svc-c5gfx Mar 9 00:28:47.171: INFO: Got endpoints: latency-svc-qr8wj [761.926729ms] Mar 9 00:28:47.201: INFO: Created: latency-svc-zr6wl Mar 9 00:28:47.201: INFO: Got endpoints: latency-svc-6crgd [743.669826ms] Mar 9 00:28:47.255: INFO: Got endpoints: latency-svc-jlqnc [761.580452ms] Mar 9 00:28:47.255: INFO: Created: latency-svc-2t9cn Mar 9 00:28:47.279: INFO: Created: latency-svc-8prvx Mar 9 00:28:47.303: INFO: Created: latency-svc-8c8x4 Mar 9 00:28:47.303: INFO: Got endpoints: latency-svc-pr9gn [756.076484ms] Mar 9 00:28:47.345: INFO: Got endpoints: latency-svc-ld99c [731.446512ms] Mar 9 00:28:47.345: INFO: Created: latency-svc-tk6xt Mar 9 00:28:47.393: INFO: Got endpoints: latency-svc-kt5gd [731.97455ms] Mar 9 00:28:47.393: INFO: Created: latency-svc-mmd22 Mar 9 00:28:47.418: INFO: Created: latency-svc-rvbxw Mar 9 00:28:47.447: INFO: Created: latency-svc-2tq4d Mar 9 00:28:47.447: INFO: Got endpoints: latency-svc-457fp [749.726503ms] Mar 9 00:28:47.494: INFO: Created: latency-svc-t86dv Mar 9 00:28:47.494: INFO: Got endpoints: latency-svc-2kkp7 [736.374921ms] Mar 9 00:28:47.519: INFO: Created: latency-svc-xnhsf Mar 9 00:28:47.543: INFO: Got endpoints: latency-svc-tsc57 [751.113659ms] Mar 9 00:28:47.561: INFO: Created: latency-svc-jtfcr Mar 9 00:28:47.588: INFO: Created: latency-svc-j7rdc Mar 9 00:28:47.593: INFO: Got endpoints: latency-svc-c5gfx [740.586779ms] Mar 9 00:28:47.639: INFO: Created: latency-svc-g7s9b Mar 9 00:28:47.647: INFO: Got endpoints: latency-svc-zr6wl [740.834996ms] Mar 9 00:28:47.664: INFO: Created: latency-svc-vql7h Mar 9 00:28:47.693: INFO: Got endpoints: latency-svc-2t9cn [743.652062ms] Mar 9 00:28:47.693: INFO: Created: latency-svc-8nmpn Mar 9 00:28:47.739: INFO: Created: latency-svc-hnshq Mar 9 00:28:47.777: INFO: Created: latency-svc-k6khs Mar 9 00:28:47.777: INFO: Got endpoints: latency-svc-8prvx [768.448853ms] Mar 9 00:28:47.801: INFO: Got endpoints: latency-svc-8c8x4 [757.00453ms] Mar 9 00:28:47.820: INFO: Created: latency-svc-7sfhx Mar 9 00:28:47.871: INFO: Got endpoints: latency-svc-tk6xt [775.379495ms] Mar 9 00:28:47.871: INFO: Created: latency-svc-rh5r2 Mar 9 00:28:47.915: INFO: Created: latency-svc-lttfw Mar 9 00:28:47.916: INFO: Got endpoints: latency-svc-mmd22 [744.697463ms] Mar 9 00:28:47.945: INFO: Created: latency-svc-zzfnk Mar 9 00:28:47.946: INFO: Got endpoints: latency-svc-rvbxw [744.417651ms] Mar 9 00:28:47.990: INFO: Created: latency-svc-hd57r Mar 9 
00:28:47.991: INFO: Got endpoints: latency-svc-2tq4d [735.335449ms] Mar 9 00:28:48.024: INFO: Created: latency-svc-drl2r Mar 9 00:28:48.041: INFO: Got endpoints: latency-svc-t86dv [738.024077ms] Mar 9 00:28:48.077: INFO: Created: latency-svc-rqksn Mar 9 00:28:48.110: INFO: Got endpoints: latency-svc-xnhsf [765.46887ms] Mar 9 00:28:48.131: INFO: Created: latency-svc-hcfrh Mar 9 00:28:48.141: INFO: Got endpoints: latency-svc-jtfcr [748.521498ms] Mar 9 00:28:48.174: INFO: Created: latency-svc-jdxl2 Mar 9 00:28:48.191: INFO: Got endpoints: latency-svc-j7rdc [743.857913ms] Mar 9 00:28:48.242: INFO: Got endpoints: latency-svc-g7s9b [748.303262ms] Mar 9 00:28:48.252: INFO: Created: latency-svc-59d8j Mar 9 00:28:48.275: INFO: Created: latency-svc-7mdwf Mar 9 00:28:48.291: INFO: Got endpoints: latency-svc-vql7h [748.161115ms] Mar 9 00:28:48.329: INFO: Created: latency-svc-hj9s4 Mar 9 00:28:48.362: INFO: Got endpoints: latency-svc-8nmpn [768.381783ms] Mar 9 00:28:48.389: INFO: Created: latency-svc-cjqrz Mar 9 00:28:48.398: INFO: Got endpoints: latency-svc-hnshq [750.721491ms] Mar 9 00:28:48.425: INFO: Created: latency-svc-fjv7p Mar 9 00:28:48.441: INFO: Got endpoints: latency-svc-k6khs [748.131256ms] Mar 9 00:28:48.482: INFO: Created: latency-svc-bjvzz Mar 9 00:28:48.491: INFO: Got endpoints: latency-svc-7sfhx [713.372912ms] Mar 9 00:28:48.533: INFO: Created: latency-svc-kx6hr Mar 9 00:28:48.541: INFO: Got endpoints: latency-svc-rh5r2 [739.726848ms] Mar 9 00:28:48.601: INFO: Created: latency-svc-dmbjf Mar 9 00:28:48.601: INFO: Got endpoints: latency-svc-lttfw [730.40592ms] Mar 9 00:28:48.623: INFO: Created: latency-svc-p8jrc Mar 9 00:28:48.641: INFO: Got endpoints: latency-svc-zzfnk [725.422111ms] Mar 9 00:28:48.665: INFO: Created: latency-svc-xdsmc Mar 9 00:28:48.691: INFO: Got endpoints: latency-svc-hd57r [745.475953ms] Mar 9 00:28:48.734: INFO: Created: latency-svc-r7qqd Mar 9 00:28:48.743: INFO: Got endpoints: latency-svc-drl2r [752.295236ms] Mar 9 00:28:48.776: INFO: Created: latency-svc-rmh8l Mar 9 00:28:48.791: INFO: Got endpoints: latency-svc-rqksn [749.848881ms] Mar 9 00:28:48.883: INFO: Created: latency-svc-fv2gn Mar 9 00:28:48.883: INFO: Got endpoints: latency-svc-hcfrh [772.554366ms] Mar 9 00:28:48.905: INFO: Got endpoints: latency-svc-jdxl2 [763.42561ms] Mar 9 00:28:48.905: INFO: Created: latency-svc-2lcsg Mar 9 00:28:48.929: INFO: Created: latency-svc-wrqgn Mar 9 00:28:48.941: INFO: Got endpoints: latency-svc-59d8j [750.310002ms] Mar 9 00:28:48.965: INFO: Created: latency-svc-2mmf2 Mar 9 00:28:48.996: INFO: Got endpoints: latency-svc-7mdwf [754.343025ms] Mar 9 00:28:49.019: INFO: Created: latency-svc-9sk4w Mar 9 00:28:49.041: INFO: Got endpoints: latency-svc-hj9s4 [749.74638ms] Mar 9 00:28:49.061: INFO: Created: latency-svc-8hph5 Mar 9 00:28:49.091: INFO: Got endpoints: latency-svc-cjqrz [729.339596ms] Mar 9 00:28:49.129: INFO: Created: latency-svc-6k7hq Mar 9 00:28:49.145: INFO: Got endpoints: latency-svc-fjv7p [747.039796ms] Mar 9 00:28:49.169: INFO: Created: latency-svc-zklfh Mar 9 00:28:49.191: INFO: Got endpoints: latency-svc-bjvzz [750.115167ms] Mar 9 00:28:49.212: INFO: Created: latency-svc-mj5wb Mar 9 00:28:49.255: INFO: Got endpoints: latency-svc-kx6hr [763.65915ms] Mar 9 00:28:49.277: INFO: Created: latency-svc-gjtkg Mar 9 00:28:49.313: INFO: Got endpoints: latency-svc-dmbjf [771.979904ms] Mar 9 00:28:49.368: INFO: Created: latency-svc-drbwr Mar 9 00:28:49.368: INFO: Got endpoints: latency-svc-p8jrc [766.931576ms] Mar 9 00:28:49.390: INFO: Created: latency-svc-z778p Mar 9 
00:28:49.396: INFO: Got endpoints: latency-svc-xdsmc [754.828509ms] Mar 9 00:28:49.433: INFO: Created: latency-svc-clfpl Mar 9 00:28:49.457: INFO: Got endpoints: latency-svc-r7qqd [766.160996ms] Mar 9 00:28:49.493: INFO: Created: latency-svc-n84dw Mar 9 00:28:49.493: INFO: Got endpoints: latency-svc-rmh8l [750.50834ms] Mar 9 00:28:49.516: INFO: Created: latency-svc-5lzhv Mar 9 00:28:49.546: INFO: Got endpoints: latency-svc-fv2gn [755.319357ms] Mar 9 00:28:49.565: INFO: Created: latency-svc-jkn7j Mar 9 00:28:49.591: INFO: Got endpoints: latency-svc-2lcsg [707.917564ms] Mar 9 00:28:49.631: INFO: Created: latency-svc-dmpdc Mar 9 00:28:49.641: INFO: Got endpoints: latency-svc-wrqgn [736.040599ms] Mar 9 00:28:49.661: INFO: Created: latency-svc-4757q Mar 9 00:28:49.691: INFO: Got endpoints: latency-svc-2mmf2 [749.614672ms] Mar 9 00:28:49.739: INFO: Created: latency-svc-fg682 Mar 9 00:28:49.745: INFO: Got endpoints: latency-svc-9sk4w [748.977911ms] Mar 9 00:28:49.769: INFO: Created: latency-svc-55rj6 Mar 9 00:28:49.791: INFO: Got endpoints: latency-svc-8hph5 [749.985054ms] Mar 9 00:28:49.817: INFO: Created: latency-svc-tfj4v Mar 9 00:28:50.033: INFO: Got endpoints: latency-svc-6k7hq [941.400869ms] Mar 9 00:28:50.034: INFO: Got endpoints: latency-svc-zklfh [888.889255ms] Mar 9 00:28:50.034: INFO: Got endpoints: latency-svc-mj5wb [842.460571ms] Mar 9 00:28:50.034: INFO: Got endpoints: latency-svc-gjtkg [779.267036ms] Mar 9 00:28:50.063: INFO: Created: latency-svc-knfpc Mar 9 00:28:50.063: INFO: Got endpoints: latency-svc-drbwr [749.936905ms] Mar 9 00:28:50.087: INFO: Created: latency-svc-48sn4 Mar 9 00:28:50.105: INFO: Got endpoints: latency-svc-z778p [736.477824ms] Mar 9 00:28:50.123: INFO: Created: latency-svc-pnb6c Mar 9 00:28:50.297: INFO: Got endpoints: latency-svc-n84dw [839.817992ms] Mar 9 00:28:50.297: INFO: Got endpoints: latency-svc-clfpl [901.416208ms] Mar 9 00:28:50.298: INFO: Created: latency-svc-c8fmt Mar 9 00:28:50.298: INFO: Got endpoints: latency-svc-jkn7j [751.60114ms] Mar 9 00:28:50.298: INFO: Got endpoints: latency-svc-5lzhv [804.48717ms] Mar 9 00:28:50.344: INFO: Created: latency-svc-54l9j Mar 9 00:28:50.344: INFO: Got endpoints: latency-svc-dmpdc [753.492071ms] Mar 9 00:28:50.363: INFO: Created: latency-svc-xpkkh Mar 9 00:28:50.380: INFO: Created: latency-svc-g2flr Mar 9 00:28:50.422: INFO: Got endpoints: latency-svc-4757q [780.893173ms] Mar 9 00:28:50.423: INFO: Created: latency-svc-p8jn8 Mar 9 00:28:50.448: INFO: Created: latency-svc-hmqx4 Mar 9 00:28:50.448: INFO: Got endpoints: latency-svc-fg682 [756.607571ms] Mar 9 00:28:50.487: INFO: Created: latency-svc-zxhzl Mar 9 00:28:50.507: INFO: Got endpoints: latency-svc-55rj6 [761.765591ms] Mar 9 00:28:50.554: INFO: Created: latency-svc-lgjdm Mar 9 00:28:50.555: INFO: Got endpoints: latency-svc-tfj4v [763.688706ms] Mar 9 00:28:50.597: INFO: Created: latency-svc-4fsp6 Mar 9 00:28:50.597: INFO: Got endpoints: latency-svc-knfpc [564.587025ms] Mar 9 00:28:50.621: INFO: Created: latency-svc-vwclk Mar 9 00:28:50.685: INFO: Created: latency-svc-gvglt Mar 9 00:28:50.685: INFO: Got endpoints: latency-svc-48sn4 [651.790608ms] Mar 9 00:28:50.717: INFO: Got endpoints: latency-svc-pnb6c [683.076217ms] Mar 9 00:28:50.717: INFO: Created: latency-svc-q2h5k Mar 9 00:28:50.748: INFO: Created: latency-svc-mvgdl Mar 9 00:28:50.748: INFO: Got endpoints: latency-svc-c8fmt [714.248373ms] Mar 9 00:28:50.778: INFO: Created: latency-svc-g9kf2 Mar 9 00:28:50.801: INFO: Got endpoints: latency-svc-54l9j [738.311821ms] Mar 9 00:28:50.837: INFO: Created: 
latency-svc-kkh2f Mar 9 00:28:50.852: INFO: Got endpoints: latency-svc-xpkkh [747.207705ms] Mar 9 00:28:50.879: INFO: Created: latency-svc-tb7fd Mar 9 00:28:50.897: INFO: Created: latency-svc-2js4g Mar 9 00:28:50.897: INFO: Got endpoints: latency-svc-g2flr [599.898124ms] Mar 9 00:28:50.939: INFO: Created: latency-svc-cl4ld Mar 9 00:28:50.942: INFO: Got endpoints: latency-svc-p8jn8 [644.356684ms] Mar 9 00:28:50.963: INFO: Created: latency-svc-hh6j9 Mar 9 00:28:50.994: INFO: Got endpoints: latency-svc-hmqx4 [695.561444ms] Mar 9 00:28:50.994: INFO: Created: latency-svc-fkrmj Mar 9 00:28:51.047: INFO: Got endpoints: latency-svc-zxhzl [625.31127ms] Mar 9 00:28:51.048: INFO: Created: latency-svc-2p42m Mar 9 00:28:51.070: INFO: Created: latency-svc-6h492 Mar 9 00:28:51.091: INFO: Got endpoints: latency-svc-lgjdm [792.996081ms] Mar 9 00:28:51.113: INFO: Created: latency-svc-rbtgg Mar 9 00:28:51.141: INFO: Got endpoints: latency-svc-4fsp6 [797.009733ms] Mar 9 00:28:51.179: INFO: Created: latency-svc-8v2hj Mar 9 00:28:51.194: INFO: Got endpoints: latency-svc-vwclk [746.58525ms] Mar 9 00:28:51.215: INFO: Created: latency-svc-226fp Mar 9 00:28:51.241: INFO: Got endpoints: latency-svc-gvglt [733.722084ms] Mar 9 00:28:51.291: INFO: Got endpoints: latency-svc-q2h5k [736.522952ms] Mar 9 00:28:51.341: INFO: Got endpoints: latency-svc-mvgdl [743.789461ms] Mar 9 00:28:51.404: INFO: Got endpoints: latency-svc-g9kf2 [718.323617ms] Mar 9 00:28:51.441: INFO: Got endpoints: latency-svc-kkh2f [723.779831ms] Mar 9 00:28:51.492: INFO: Got endpoints: latency-svc-tb7fd [743.963994ms] Mar 9 00:28:51.543: INFO: Got endpoints: latency-svc-2js4g [741.30035ms] Mar 9 00:28:51.591: INFO: Got endpoints: latency-svc-cl4ld [738.454745ms] Mar 9 00:28:51.641: INFO: Got endpoints: latency-svc-hh6j9 [743.987208ms] Mar 9 00:28:51.691: INFO: Got endpoints: latency-svc-fkrmj [749.104277ms] Mar 9 00:28:51.757: INFO: Got endpoints: latency-svc-2p42m [763.502658ms] Mar 9 00:28:51.791: INFO: Got endpoints: latency-svc-6h492 [744.016489ms] Mar 9 00:28:51.841: INFO: Got endpoints: latency-svc-rbtgg [749.811383ms] Mar 9 00:28:51.891: INFO: Got endpoints: latency-svc-8v2hj [749.43691ms] Mar 9 00:28:51.941: INFO: Got endpoints: latency-svc-226fp [746.465415ms] Mar 9 00:28:51.941: INFO: Latencies: [44.388791ms 62.589113ms 87.319755ms 129.687874ms 164.928412ms 188.177197ms 203.637875ms 266.066597ms 293.562108ms 317.361485ms 344.905996ms 398.247632ms 400.68148ms 404.391133ms 425.268944ms 425.357342ms 431.462469ms 435.042581ms 436.349379ms 437.218829ms 437.414225ms 449.316662ms 449.537029ms 454.759537ms 455.001687ms 455.494899ms 455.862622ms 455.984029ms 456.444211ms 458.636379ms 462.952581ms 463.039685ms 464.514943ms 465.745102ms 467.641773ms 467.839938ms 468.391974ms 468.450289ms 468.528041ms 473.378638ms 473.652577ms 474.663278ms 476.303053ms 477.153368ms 479.266588ms 479.322585ms 479.739755ms 481.838992ms 482.86322ms 482.982864ms 484.013011ms 485.469018ms 487.180084ms 489.153518ms 489.183523ms 491.121357ms 491.260979ms 491.418841ms 491.561827ms 497.126174ms 503.387742ms 503.614564ms 504.541996ms 507.60894ms 508.34749ms 509.115219ms 509.527983ms 515.665759ms 517.981868ms 518.156543ms 521.540242ms 522.523501ms 522.978517ms 526.369191ms 526.458591ms 527.956475ms 533.141976ms 533.166456ms 533.339657ms 536.953226ms 540.721154ms 544.909108ms 549.901711ms 557.805903ms 560.748049ms 564.587025ms 564.653566ms 583.566461ms 599.898124ms 600.011212ms 600.127444ms 606.293017ms 625.31127ms 630.801355ms 644.356684ms 648.358756ms 651.790608ms 678.570871ms 
683.076217ms 690.299601ms 690.60456ms 695.561444ms 701.573757ms 707.917564ms 712.57094ms 713.372912ms 714.248373ms 718.323617ms 723.779831ms 725.422111ms 729.339596ms 730.40592ms 731.446512ms 731.97455ms 733.461564ms 733.722084ms 734.739573ms 735.335449ms 736.040599ms 736.374921ms 736.477824ms 736.522952ms 738.024077ms 738.311821ms 738.454745ms 739.726848ms 740.586779ms 740.834996ms 741.30035ms 743.269454ms 743.652062ms 743.669826ms 743.789461ms 743.857913ms 743.963994ms 743.987208ms 744.016489ms 744.417651ms 744.697463ms 745.321406ms 745.475953ms 746.465415ms 746.58525ms 747.039796ms 747.207705ms 748.131256ms 748.161115ms 748.303262ms 748.521498ms 748.977911ms 749.104277ms 749.43691ms 749.614672ms 749.667644ms 749.726503ms 749.74638ms 749.811383ms 749.848881ms 749.936905ms 749.985054ms 750.115167ms 750.310002ms 750.50834ms 750.721491ms 751.113659ms 751.60114ms 752.295236ms 753.492071ms 754.343025ms 754.828509ms 755.319357ms 756.076484ms 756.607571ms 757.00453ms 761.580452ms 761.765591ms 761.926729ms 763.42561ms 763.502658ms 763.65915ms 763.688706ms 765.46887ms 766.160996ms 766.92528ms 766.931576ms 768.381783ms 768.448853ms 771.979904ms 772.554366ms 775.379495ms 779.267036ms 780.893173ms 792.996081ms 797.009733ms 804.48717ms 839.817992ms 842.460571ms 888.889255ms 901.416208ms 941.400869ms] Mar 9 00:28:51.941: INFO: 50 %ile: 690.60456ms Mar 9 00:28:51.941: INFO: 90 %ile: 763.688706ms Mar 9 00:28:51.941: INFO: 99 %ile: 901.416208ms Mar 9 00:28:51.941: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:28:51.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-125" for this suite. 
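The 200 latency samples above are reduced to the 50/90/99 %ile figures before the suite judges whether endpoint propagation is "very high". A self-contained sketch of that reduction using the nearest-rank convention (the e2e framework's own helper may round differently), fed a handful of the durations from this run:

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the p-th percentile of ds under a nearest-rank
// convention: sort ascending, then index at p*len/100.
func percentile(ds []time.Duration, p int) time.Duration {
	sorted := append([]time.Duration(nil), ds...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	idx := (p * len(sorted)) / 100
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// A few of the observed endpoint latencies, in nanoseconds.
	samples := []time.Duration{
		44388791, 129687874, 690604560, 763688706, 901416208, 941400869,
	}
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
	}
}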
• [SLOW TEST:10.881 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":280,"completed":236,"skipped":3949,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:28:51.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Mar 9 00:28:52.004: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Mar 9 00:29:02.718: INFO: >>> kubeConfig: /root/.kube/config Mar 9 00:29:04.940: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:29:15.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9578" for this suite. 
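The "same group but different versions" case boils down to a CustomResourceDefinition that serves several versions at once, each of which must then appear in the aggregated OpenAPI document. A hedged sketch of a two-version CRD of that shape — group, kind, and version names here are hypothetical — assuming the apiextensions-apiserver v1 types and sigs.k8s.io/yaml:

package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	schema := &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
	}
	crd := apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{
			Name: "e2e-tests.crd-publish-openapi-test.example.com",
		},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "crd-publish-openapi-test.example.com", // hypothetical group
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "e2e-tests", Singular: "e2e-test",
				Kind: "E2eTest", ListKind: "E2eTestList",
			},
			Scope: apiextensionsv1.NamespaceScoped,
			// Two versions under one group: both served, exactly one marked
			// as the storage version.
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				{Name: "v2", Served: true, Storage: true, Schema: schema},
				{Name: "v3", Served: true, Storage: false, Schema: schema},
			},
		},
	}
	out, _ := yaml.Marshal(crd)
	fmt.Println(string(out))
}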
• [SLOW TEST:23.807 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":280,"completed":237,"skipped":3970,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:29:15.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name s-test-opt-del-ed3aabf5-0edb-46f7-9691-4000842b240d STEP: Creating secret with name s-test-opt-upd-4ab6756f-0161-4d59-8b3b-41bf0a33e113 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-ed3aabf5-0edb-46f7-9691-4000842b240d STEP: Updating secret s-test-opt-upd-4ab6756f-0161-4d59-8b3b-41bf0a33e113 STEP: Creating secret with name s-test-opt-create-f0aebc17-11a8-4a25-bb24-727102719a77 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:30:26.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8438" for this suite. 
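The reason the test above can delete one source secret and update another while the pod keeps running is that each secret projection in the volume is marked Optional. A sketch of such a projected volume, reusing the secret-name prefixes from this run (UUID suffixes elided) and assuming the k8s.io/api/core/v1 types:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	secretSource := func(name string) corev1.VolumeProjection {
		return corev1.VolumeProjection{
			Secret: &corev1.SecretProjection{
				LocalObjectReference: corev1.LocalObjectReference{Name: name},
				// Optional lets the kubelet tolerate a missing secret, which
				// is what allows the "del" source to disappear while the
				// volume keeps serving the remaining sources.
				Optional: &optional,
			},
		}
	}
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					secretSource("s-test-opt-del"), // later deleted
					secretSource("s-test-opt-upd"), // later updated
				},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}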
• [SLOW TEST:70.580 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":238,"skipped":3975,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:30:26.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8008 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8008;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8008 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8008;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8008.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8008.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8008.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8008.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8008.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8008.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8008.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8008.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8008.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8008.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8008.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8008.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8008.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 230.164.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.164.230_udp@PTR;check="$$(dig +tcp +noall +answer +search 230.164.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.164.230_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8008 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8008;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8008 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8008;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8008.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8008.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8008.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8008.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8008.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8008.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8008.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8008.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8008.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8008.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8008.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8008.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8008.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 230.164.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.164.230_udp@PTR;check="$$(dig +tcp +noall +answer +search 230.164.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.164.230_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 9 00:30:30.473: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:30.476: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:30.479: INFO: Unable to read wheezy_udp@dns-test-service.dns-8008 from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:30.481: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8008 from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:30.495: INFO: Unable to read wheezy_udp@dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:30.498: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:30.500: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:30.502: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:30.519: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:30.522: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:30.524: INFO: Unable to read jessie_udp@dns-test-service.dns-8008 from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:30.527: INFO: Unable to read jessie_tcp@dns-test-service.dns-8008 from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:30.529: INFO: Unable to read jessie_udp@dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:30.531: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:30.534: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:30.536: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:30.552: INFO: Lookups using dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8008 wheezy_tcp@dns-test-service.dns-8008 wheezy_udp@dns-test-service.dns-8008.svc wheezy_tcp@dns-test-service.dns-8008.svc wheezy_udp@_http._tcp.dns-test-service.dns-8008.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8008.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8008 jessie_tcp@dns-test-service.dns-8008 jessie_udp@dns-test-service.dns-8008.svc jessie_tcp@dns-test-service.dns-8008.svc jessie_udp@_http._tcp.dns-test-service.dns-8008.svc jessie_tcp@_http._tcp.dns-test-service.dns-8008.svc] Mar 9 00:30:35.557: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:35.560: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:35.563: INFO: Unable to read wheezy_udp@dns-test-service.dns-8008 from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:35.566: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8008 from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:35.569: INFO: Unable to read wheezy_udp@dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:35.571: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:35.574: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:35.577: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:35.597: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:35.599: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:35.601: INFO: Unable to read jessie_udp@dns-test-service.dns-8008 from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:35.604: INFO: Unable to read jessie_tcp@dns-test-service.dns-8008 from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:35.606: INFO: Unable to read jessie_udp@dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:35.609: INFO: Unable to read jessie_tcp@dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:35.611: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:35.613: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:35.629: INFO: Lookups using dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8008 wheezy_tcp@dns-test-service.dns-8008 wheezy_udp@dns-test-service.dns-8008.svc wheezy_tcp@dns-test-service.dns-8008.svc wheezy_udp@_http._tcp.dns-test-service.dns-8008.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8008.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8008 jessie_tcp@dns-test-service.dns-8008 jessie_udp@dns-test-service.dns-8008.svc jessie_tcp@dns-test-service.dns-8008.svc jessie_udp@_http._tcp.dns-test-service.dns-8008.svc jessie_tcp@_http._tcp.dns-test-service.dns-8008.svc] Mar 9 00:30:40.556: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:40.559: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:40.561: INFO: Unable to read wheezy_udp@dns-test-service.dns-8008 from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:40.563: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8008 from pod 
dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:40.566: INFO: Unable to read wheezy_udp@dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:40.568: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:40.571: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:40.573: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:40.604: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:40.607: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:40.609: INFO: Unable to read jessie_udp@dns-test-service.dns-8008 from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:40.611: INFO: Unable to read jessie_tcp@dns-test-service.dns-8008 from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:40.613: INFO: Unable to read jessie_udp@dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:40.615: INFO: Unable to read jessie_tcp@dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:40.618: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:40.620: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:40.634: INFO: Lookups using dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8008 wheezy_tcp@dns-test-service.dns-8008 wheezy_udp@dns-test-service.dns-8008.svc wheezy_tcp@dns-test-service.dns-8008.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-8008.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8008.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8008 jessie_tcp@dns-test-service.dns-8008 jessie_udp@dns-test-service.dns-8008.svc jessie_tcp@dns-test-service.dns-8008.svc jessie_udp@_http._tcp.dns-test-service.dns-8008.svc jessie_tcp@_http._tcp.dns-test-service.dns-8008.svc] Mar 9 00:30:45.557: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:45.560: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:45.564: INFO: Unable to read wheezy_udp@dns-test-service.dns-8008 from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:45.567: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8008 from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:45.570: INFO: Unable to read wheezy_udp@dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:45.573: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:45.576: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:45.580: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:45.601: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:45.603: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:45.608: INFO: Unable to read jessie_udp@dns-test-service.dns-8008 from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:45.612: INFO: Unable to read jessie_tcp@dns-test-service.dns-8008 from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:45.614: INFO: Unable to read jessie_udp@dns-test-service.dns-8008.svc from pod 
dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:45.617: INFO: Unable to read jessie_tcp@dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:45.619: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:45.622: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:45.637: INFO: Lookups using dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8008 wheezy_tcp@dns-test-service.dns-8008 wheezy_udp@dns-test-service.dns-8008.svc wheezy_tcp@dns-test-service.dns-8008.svc wheezy_udp@_http._tcp.dns-test-service.dns-8008.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8008.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8008 jessie_tcp@dns-test-service.dns-8008 jessie_udp@dns-test-service.dns-8008.svc jessie_tcp@dns-test-service.dns-8008.svc jessie_udp@_http._tcp.dns-test-service.dns-8008.svc jessie_tcp@_http._tcp.dns-test-service.dns-8008.svc] Mar 9 00:30:50.556: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:50.559: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:50.561: INFO: Unable to read wheezy_udp@dns-test-service.dns-8008 from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:50.564: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8008 from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:50.566: INFO: Unable to read wheezy_udp@dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:50.568: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:50.571: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:50.573: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8008.svc from pod 
dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:50.588: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:50.590: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:50.592: INFO: Unable to read jessie_udp@dns-test-service.dns-8008 from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:50.594: INFO: Unable to read jessie_tcp@dns-test-service.dns-8008 from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:50.596: INFO: Unable to read jessie_udp@dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:50.598: INFO: Unable to read jessie_tcp@dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:50.600: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:50.602: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:50.614: INFO: Lookups using dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8008 wheezy_tcp@dns-test-service.dns-8008 wheezy_udp@dns-test-service.dns-8008.svc wheezy_tcp@dns-test-service.dns-8008.svc wheezy_udp@_http._tcp.dns-test-service.dns-8008.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8008.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8008 jessie_tcp@dns-test-service.dns-8008 jessie_udp@dns-test-service.dns-8008.svc jessie_tcp@dns-test-service.dns-8008.svc jessie_udp@_http._tcp.dns-test-service.dns-8008.svc jessie_tcp@_http._tcp.dns-test-service.dns-8008.svc] Mar 9 00:30:55.563: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:55.567: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:55.569: INFO: Unable to read wheezy_udp@dns-test-service.dns-8008 from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could 
not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:55.572: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8008 from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:55.574: INFO: Unable to read wheezy_udp@dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:55.576: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:55.578: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:55.580: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:55.612: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:55.614: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:55.617: INFO: Unable to read jessie_udp@dns-test-service.dns-8008 from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:55.619: INFO: Unable to read jessie_tcp@dns-test-service.dns-8008 from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:55.621: INFO: Unable to read jessie_udp@dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:55.623: INFO: Unable to read jessie_tcp@dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:55.625: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:55.627: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8008.svc from pod dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3: the server could not find the requested resource (get pods dns-test-e00e965c-8843-47a2-9224-a03c1581ede3) Mar 9 00:30:55.640: INFO: Lookups using dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service 
wheezy_udp@dns-test-service.dns-8008 wheezy_tcp@dns-test-service.dns-8008 wheezy_udp@dns-test-service.dns-8008.svc wheezy_tcp@dns-test-service.dns-8008.svc wheezy_udp@_http._tcp.dns-test-service.dns-8008.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8008.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8008 jessie_tcp@dns-test-service.dns-8008 jessie_udp@dns-test-service.dns-8008.svc jessie_tcp@dns-test-service.dns-8008.svc jessie_udp@_http._tcp.dns-test-service.dns-8008.svc jessie_tcp@_http._tcp.dns-test-service.dns-8008.svc] Mar 9 00:31:00.636: INFO: DNS probes using dns-8008/dns-test-e00e965c-8843-47a2-9224-a03c1581ede3 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:31:00.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8008" for this suite. • [SLOW TEST:34.471 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":280,"completed":239,"skipped":3987,"failed":0} SSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:31:00.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-n5v7s in namespace proxy-7205 I0309 00:31:00.921019 7 runners.go:189] Created replication controller with name: proxy-service-n5v7s, namespace: proxy-7205, replica count: 1 I0309 00:31:01.971491 7 runners.go:189] proxy-service-n5v7s Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0309 00:31:02.971770 7 runners.go:189] proxy-service-n5v7s Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0309 00:31:03.971993 7 runners.go:189] proxy-service-n5v7s Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0309 00:31:04.972202 7 runners.go:189] proxy-service-n5v7s Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 9 00:31:04.975: INFO: setup took 4.09623389s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 9 00:31:04.988: INFO: (0) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 13.011763ms) Mar 9 
00:31:04.989: INFO: (0) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:1080/proxy/: ... (200; 13.532761ms) Mar 9 00:31:04.989: INFO: (0) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:162/proxy/: bar (200; 13.70417ms) Mar 9 00:31:04.989: INFO: (0) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:1080/proxy/: test<... (200; 14.170675ms) Mar 9 00:31:04.991: INFO: (0) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm/proxy/: test (200; 16.057139ms) Mar 9 00:31:04.992: INFO: (0) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:162/proxy/: bar (200; 16.595888ms) Mar 9 00:31:04.993: INFO: (0) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 17.427157ms) Mar 9 00:31:04.994: INFO: (0) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname2/proxy/: bar (200; 19.191398ms) Mar 9 00:31:04.995: INFO: (0) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname1/proxy/: foo (200; 19.223172ms) Mar 9 00:31:04.998: INFO: (0) /api/v1/namespaces/proxy-7205/services/http:proxy-service-n5v7s:portname2/proxy/: bar (200; 23.056117ms) Mar 9 00:31:04.998: INFO: (0) /api/v1/namespaces/proxy-7205/services/http:proxy-service-n5v7s:portname1/proxy/: foo (200; 23.205313ms) Mar 9 00:31:05.001: INFO: (0) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:460/proxy/: tls baz (200; 25.924258ms) Mar 9 00:31:05.001: INFO: (0) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:462/proxy/: tls qux (200; 25.601996ms) Mar 9 00:31:05.001: INFO: (0) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname2/proxy/: tls qux (200; 25.883156ms) Mar 9 00:31:05.001: INFO: (0) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:443/proxy/: test<... (200; 8.339832ms) Mar 9 00:31:05.010: INFO: (1) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:460/proxy/: tls baz (200; 8.262871ms) Mar 9 00:31:05.010: INFO: (1) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 8.319705ms) Mar 9 00:31:05.010: INFO: (1) /api/v1/namespaces/proxy-7205/services/http:proxy-service-n5v7s:portname1/proxy/: foo (200; 8.284782ms) Mar 9 00:31:05.010: INFO: (1) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname2/proxy/: tls qux (200; 8.507226ms) Mar 9 00:31:05.010: INFO: (1) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:162/proxy/: bar (200; 8.398278ms) Mar 9 00:31:05.010: INFO: (1) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:1080/proxy/: ... 
(200; 8.724297ms) Mar 9 00:31:05.010: INFO: (1) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm/proxy/: test (200; 8.456818ms) Mar 9 00:31:05.014: INFO: (2) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm/proxy/: test (200; 3.294988ms) Mar 9 00:31:05.015: INFO: (2) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:162/proxy/: bar (200; 4.801305ms) Mar 9 00:31:05.016: INFO: (2) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:162/proxy/: bar (200; 5.213088ms) Mar 9 00:31:05.016: INFO: (2) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 5.336206ms) Mar 9 00:31:05.016: INFO: (2) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:460/proxy/: tls baz (200; 5.32324ms) Mar 9 00:31:05.016: INFO: (2) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname2/proxy/: tls qux (200; 5.44056ms) Mar 9 00:31:05.016: INFO: (2) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:462/proxy/: tls qux (200; 5.295503ms) Mar 9 00:31:05.016: INFO: (2) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:1080/proxy/: ... (200; 5.302482ms) Mar 9 00:31:05.016: INFO: (2) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:443/proxy/: test<... (200; 5.331818ms) Mar 9 00:31:05.016: INFO: (2) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 5.424294ms) Mar 9 00:31:05.018: INFO: (2) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname2/proxy/: bar (200; 7.028204ms) Mar 9 00:31:05.022: INFO: (2) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname1/proxy/: tls baz (200; 11.589402ms) Mar 9 00:31:05.022: INFO: (2) /api/v1/namespaces/proxy-7205/services/http:proxy-service-n5v7s:portname2/proxy/: bar (200; 11.673705ms) Mar 9 00:31:05.022: INFO: (2) /api/v1/namespaces/proxy-7205/services/http:proxy-service-n5v7s:portname1/proxy/: foo (200; 11.784812ms) Mar 9 00:31:05.022: INFO: (2) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname1/proxy/: foo (200; 11.837994ms) Mar 9 00:31:05.026: INFO: (3) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:162/proxy/: bar (200; 3.717002ms) Mar 9 00:31:05.027: INFO: (3) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 4.052614ms) Mar 9 00:31:05.027: INFO: (3) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm/proxy/: test (200; 3.974721ms) Mar 9 00:31:05.027: INFO: (3) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 4.278552ms) Mar 9 00:31:05.029: INFO: (3) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:460/proxy/: tls baz (200; 6.188721ms) Mar 9 00:31:05.029: INFO: (3) /api/v1/namespaces/proxy-7205/services/http:proxy-service-n5v7s:portname2/proxy/: bar (200; 6.442668ms) Mar 9 00:31:05.029: INFO: (3) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:443/proxy/: ... (200; 7.239388ms) Mar 9 00:31:05.030: INFO: (3) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:1080/proxy/: test<... 
(200; 7.271711ms) Mar 9 00:31:05.030: INFO: (3) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:462/proxy/: tls qux (200; 7.295693ms) Mar 9 00:31:05.031: INFO: (3) /api/v1/namespaces/proxy-7205/services/http:proxy-service-n5v7s:portname1/proxy/: foo (200; 8.149749ms) Mar 9 00:31:05.031: INFO: (3) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname1/proxy/: foo (200; 8.602315ms) Mar 9 00:31:05.031: INFO: (3) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname2/proxy/: bar (200; 8.749552ms) Mar 9 00:31:05.031: INFO: (3) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname2/proxy/: tls qux (200; 8.754759ms) Mar 9 00:31:05.031: INFO: (3) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname1/proxy/: tls baz (200; 8.874094ms) Mar 9 00:31:05.036: INFO: (4) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:460/proxy/: tls baz (200; 4.749751ms) Mar 9 00:31:05.036: INFO: (4) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:1080/proxy/: test<... (200; 4.61249ms) Mar 9 00:31:05.036: INFO: (4) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 4.663427ms) Mar 9 00:31:05.037: INFO: (4) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:1080/proxy/: ... (200; 4.991209ms) Mar 9 00:31:05.037: INFO: (4) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:462/proxy/: tls qux (200; 5.184188ms) Mar 9 00:31:05.037: INFO: (4) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:162/proxy/: bar (200; 5.280385ms) Mar 9 00:31:05.037: INFO: (4) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:443/proxy/: test (200; 5.380855ms) Mar 9 00:31:05.037: INFO: (4) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:162/proxy/: bar (200; 5.744629ms) Mar 9 00:31:05.037: INFO: (4) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 5.842177ms) Mar 9 00:31:05.041: INFO: (4) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname2/proxy/: tls qux (200; 8.889036ms) Mar 9 00:31:05.041: INFO: (4) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname1/proxy/: foo (200; 9.238384ms) Mar 9 00:31:05.042: INFO: (4) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname1/proxy/: tls baz (200; 10.158157ms) Mar 9 00:31:05.058: INFO: (4) /api/v1/namespaces/proxy-7205/services/http:proxy-service-n5v7s:portname1/proxy/: foo (200; 26.201197ms) Mar 9 00:31:05.058: INFO: (4) /api/v1/namespaces/proxy-7205/services/http:proxy-service-n5v7s:portname2/proxy/: bar (200; 26.466851ms) Mar 9 00:31:05.059: INFO: (4) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname2/proxy/: bar (200; 27.04136ms) Mar 9 00:31:05.062: INFO: (5) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 3.522576ms) Mar 9 00:31:05.063: INFO: (5) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm/proxy/: test (200; 4.314323ms) Mar 9 00:31:05.063: INFO: (5) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:462/proxy/: tls qux (200; 4.525177ms) Mar 9 00:31:05.064: INFO: (5) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:443/proxy/: ... 
(200; 5.50959ms) Mar 9 00:31:05.065: INFO: (5) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname2/proxy/: tls qux (200; 5.476211ms) Mar 9 00:31:05.065: INFO: (5) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:460/proxy/: tls baz (200; 5.84592ms) Mar 9 00:31:05.065: INFO: (5) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname2/proxy/: bar (200; 5.86223ms) Mar 9 00:31:05.066: INFO: (5) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:162/proxy/: bar (200; 6.582427ms) Mar 9 00:31:05.066: INFO: (5) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:1080/proxy/: test<... (200; 6.440919ms) Mar 9 00:31:05.066: INFO: (5) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:162/proxy/: bar (200; 6.621219ms) Mar 9 00:31:05.066: INFO: (5) /api/v1/namespaces/proxy-7205/services/http:proxy-service-n5v7s:portname1/proxy/: foo (200; 6.91667ms) Mar 9 00:31:05.066: INFO: (5) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 6.926472ms) Mar 9 00:31:05.066: INFO: (5) /api/v1/namespaces/proxy-7205/services/http:proxy-service-n5v7s:portname2/proxy/: bar (200; 6.985974ms) Mar 9 00:31:05.066: INFO: (5) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname1/proxy/: tls baz (200; 7.048637ms) Mar 9 00:31:05.066: INFO: (5) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname1/proxy/: foo (200; 7.123106ms) Mar 9 00:31:05.070: INFO: (6) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:162/proxy/: bar (200; 3.960661ms) Mar 9 00:31:05.071: INFO: (6) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 4.700639ms) Mar 9 00:31:05.071: INFO: (6) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:1080/proxy/: ... (200; 4.773489ms) Mar 9 00:31:05.071: INFO: (6) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:1080/proxy/: test<... (200; 5.125099ms) Mar 9 00:31:05.071: INFO: (6) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm/proxy/: test (200; 5.100843ms) Mar 9 00:31:05.072: INFO: (6) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:462/proxy/: tls qux (200; 5.316691ms) Mar 9 00:31:05.072: INFO: (6) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname1/proxy/: foo (200; 5.6536ms) Mar 9 00:31:05.072: INFO: (6) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:460/proxy/: tls baz (200; 5.825116ms) Mar 9 00:31:05.072: INFO: (6) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 5.982077ms) Mar 9 00:31:05.072: INFO: (6) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:162/proxy/: bar (200; 5.950857ms) Mar 9 00:31:05.072: INFO: (6) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:443/proxy/: ... (200; 4.976043ms) Mar 9 00:31:05.078: INFO: (7) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:443/proxy/: test<... 
(200; 5.027196ms) Mar 9 00:31:05.078: INFO: (7) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm/proxy/: test (200; 5.130615ms) Mar 9 00:31:05.079: INFO: (7) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname2/proxy/: bar (200; 6.045884ms) Mar 9 00:31:05.081: INFO: (7) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname1/proxy/: foo (200; 7.565411ms) Mar 9 00:31:05.081: INFO: (7) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname2/proxy/: tls qux (200; 7.530858ms) Mar 9 00:31:05.081: INFO: (7) /api/v1/namespaces/proxy-7205/services/http:proxy-service-n5v7s:portname1/proxy/: foo (200; 7.507156ms) Mar 9 00:31:05.081: INFO: (7) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname1/proxy/: tls baz (200; 7.814617ms) Mar 9 00:31:05.086: INFO: (8) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:462/proxy/: tls qux (200; 4.554665ms) Mar 9 00:31:05.086: INFO: (8) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:460/proxy/: tls baz (200; 4.630489ms) Mar 9 00:31:05.087: INFO: (8) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 5.794168ms) Mar 9 00:31:05.087: INFO: (8) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:1080/proxy/: ... (200; 5.732013ms) Mar 9 00:31:05.087: INFO: (8) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname2/proxy/: bar (200; 5.874287ms) Mar 9 00:31:05.087: INFO: (8) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:443/proxy/: test (200; 6.354498ms) Mar 9 00:31:05.088: INFO: (8) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname1/proxy/: foo (200; 6.427571ms) Mar 9 00:31:05.088: INFO: (8) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:1080/proxy/: test<... (200; 6.957535ms) Mar 9 00:31:05.088: INFO: (8) /api/v1/namespaces/proxy-7205/services/http:proxy-service-n5v7s:portname1/proxy/: foo (200; 6.976882ms) Mar 9 00:31:05.088: INFO: (8) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname1/proxy/: tls baz (200; 7.114248ms) Mar 9 00:31:05.089: INFO: (8) /api/v1/namespaces/proxy-7205/services/http:proxy-service-n5v7s:portname2/proxy/: bar (200; 7.324252ms) Mar 9 00:31:05.089: INFO: (8) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname2/proxy/: tls qux (200; 7.475786ms) Mar 9 00:31:05.092: INFO: (9) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:1080/proxy/: ... (200; 2.812516ms) Mar 9 00:31:05.099: INFO: (9) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm/proxy/: test (200; 9.872219ms) Mar 9 00:31:05.100: INFO: (9) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:460/proxy/: tls baz (200; 11.256306ms) Mar 9 00:31:05.100: INFO: (9) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:162/proxy/: bar (200; 11.338998ms) Mar 9 00:31:05.100: INFO: (9) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:1080/proxy/: test<... (200; 11.223855ms) Mar 9 00:31:05.100: INFO: (9) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname1/proxy/: tls baz (200; 11.307786ms) Mar 9 00:31:05.100: INFO: (9) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:462/proxy/: tls qux (200; 11.306513ms) Mar 9 00:31:05.100: INFO: (9) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:443/proxy/: test<... 
(200; 2.967666ms) Mar 9 00:31:05.104: INFO: (10) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:1080/proxy/: ... (200; 3.112764ms) Mar 9 00:31:05.104: INFO: (10) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:462/proxy/: tls qux (200; 3.648163ms) Mar 9 00:31:05.107: INFO: (10) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 5.910489ms) Mar 9 00:31:05.107: INFO: (10) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname2/proxy/: bar (200; 6.03801ms) Mar 9 00:31:05.107: INFO: (10) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname1/proxy/: tls baz (200; 6.036737ms) Mar 9 00:31:05.107: INFO: (10) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:443/proxy/: test (200; 6.615637ms) Mar 9 00:31:05.107: INFO: (10) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 6.490503ms) Mar 9 00:31:05.110: INFO: (11) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:162/proxy/: bar (200; 2.857773ms) Mar 9 00:31:05.110: INFO: (11) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:162/proxy/: bar (200; 2.782846ms) Mar 9 00:31:05.112: INFO: (11) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm/proxy/: test (200; 4.10286ms) Mar 9 00:31:05.112: INFO: (11) /api/v1/namespaces/proxy-7205/services/http:proxy-service-n5v7s:portname1/proxy/: foo (200; 4.572755ms) Mar 9 00:31:05.112: INFO: (11) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname1/proxy/: foo (200; 4.606551ms) Mar 9 00:31:05.112: INFO: (11) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:1080/proxy/: ... (200; 4.621749ms) Mar 9 00:31:05.112: INFO: (11) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:443/proxy/: test<... (200; 4.714512ms) Mar 9 00:31:05.112: INFO: (11) /api/v1/namespaces/proxy-7205/services/http:proxy-service-n5v7s:portname2/proxy/: bar (200; 4.727925ms) Mar 9 00:31:05.112: INFO: (11) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 4.901966ms) Mar 9 00:31:05.113: INFO: (11) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 5.10467ms) Mar 9 00:31:05.113: INFO: (11) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:462/proxy/: tls qux (200; 5.680815ms) Mar 9 00:31:05.113: INFO: (11) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname2/proxy/: bar (200; 5.877794ms) Mar 9 00:31:05.113: INFO: (11) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname2/proxy/: tls qux (200; 5.745227ms) Mar 9 00:31:05.113: INFO: (11) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:460/proxy/: tls baz (200; 6.095693ms) Mar 9 00:31:05.114: INFO: (11) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname1/proxy/: tls baz (200; 6.210486ms) Mar 9 00:31:05.116: INFO: (12) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 2.588245ms) Mar 9 00:31:05.116: INFO: (12) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:162/proxy/: bar (200; 2.87406ms) Mar 9 00:31:05.117: INFO: (12) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:443/proxy/: test (200; 4.446667ms) Mar 9 00:31:05.118: INFO: (12) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:1080/proxy/: test<... 
(200; 4.399052ms) Mar 9 00:31:05.118: INFO: (12) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname2/proxy/: bar (200; 4.549507ms) Mar 9 00:31:05.119: INFO: (12) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 5.030159ms) Mar 9 00:31:05.119: INFO: (12) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:460/proxy/: tls baz (200; 5.129018ms) Mar 9 00:31:05.119: INFO: (12) /api/v1/namespaces/proxy-7205/services/http:proxy-service-n5v7s:portname1/proxy/: foo (200; 5.107725ms) Mar 9 00:31:05.119: INFO: (12) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:162/proxy/: bar (200; 5.25471ms) Mar 9 00:31:05.119: INFO: (12) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:462/proxy/: tls qux (200; 5.191269ms) Mar 9 00:31:05.119: INFO: (12) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:1080/proxy/: ... (200; 5.158034ms) Mar 9 00:31:05.119: INFO: (12) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname2/proxy/: tls qux (200; 5.284688ms) Mar 9 00:31:05.119: INFO: (12) /api/v1/namespaces/proxy-7205/services/http:proxy-service-n5v7s:portname2/proxy/: bar (200; 5.620469ms) Mar 9 00:31:05.119: INFO: (12) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname1/proxy/: foo (200; 5.707085ms) Mar 9 00:31:05.119: INFO: (12) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname1/proxy/: tls baz (200; 5.794901ms) Mar 9 00:31:05.122: INFO: (13) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 2.480395ms) Mar 9 00:31:05.124: INFO: (13) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:162/proxy/: bar (200; 4.154221ms) Mar 9 00:31:05.124: INFO: (13) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:462/proxy/: tls qux (200; 4.248959ms) Mar 9 00:31:05.124: INFO: (13) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:1080/proxy/: test<... (200; 4.380674ms) Mar 9 00:31:05.124: INFO: (13) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:162/proxy/: bar (200; 4.769123ms) Mar 9 00:31:05.124: INFO: (13) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm/proxy/: test (200; 4.807681ms) Mar 9 00:31:05.124: INFO: (13) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:460/proxy/: tls baz (200; 4.802078ms) Mar 9 00:31:05.126: INFO: (13) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname2/proxy/: tls qux (200; 6.50678ms) Mar 9 00:31:05.126: INFO: (13) /api/v1/namespaces/proxy-7205/services/http:proxy-service-n5v7s:portname1/proxy/: foo (200; 6.528083ms) Mar 9 00:31:05.126: INFO: (13) /api/v1/namespaces/proxy-7205/services/http:proxy-service-n5v7s:portname2/proxy/: bar (200; 6.532153ms) Mar 9 00:31:05.126: INFO: (13) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname1/proxy/: foo (200; 6.599394ms) Mar 9 00:31:05.126: INFO: (13) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:1080/proxy/: ... (200; 6.525006ms) Mar 9 00:31:05.126: INFO: (13) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname1/proxy/: tls baz (200; 6.564984ms) Mar 9 00:31:05.126: INFO: (13) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:443/proxy/: ... (200; 4.850169ms) Mar 9 00:31:05.132: INFO: (14) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:443/proxy/: test<... 
(200; 4.938173ms) Mar 9 00:31:05.132: INFO: (14) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:162/proxy/: bar (200; 4.911528ms) Mar 9 00:31:05.132: INFO: (14) /api/v1/namespaces/proxy-7205/services/http:proxy-service-n5v7s:portname1/proxy/: foo (200; 4.886493ms) Mar 9 00:31:05.132: INFO: (14) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm/proxy/: test (200; 4.915296ms) Mar 9 00:31:05.133: INFO: (14) /api/v1/namespaces/proxy-7205/services/http:proxy-service-n5v7s:portname2/proxy/: bar (200; 5.775967ms) Mar 9 00:31:05.133: INFO: (14) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname2/proxy/: bar (200; 6.057475ms) Mar 9 00:31:05.133: INFO: (14) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname1/proxy/: foo (200; 6.096591ms) Mar 9 00:31:05.133: INFO: (14) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname2/proxy/: tls qux (200; 6.271072ms) Mar 9 00:31:05.136: INFO: (15) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:462/proxy/: tls qux (200; 2.837266ms) Mar 9 00:31:05.136: INFO: (15) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:460/proxy/: tls baz (200; 2.879083ms) Mar 9 00:31:05.136: INFO: (15) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:443/proxy/: test<... (200; 4.873016ms) Mar 9 00:31:05.138: INFO: (15) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 5.073599ms) Mar 9 00:31:05.138: INFO: (15) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:162/proxy/: bar (200; 5.160593ms) Mar 9 00:31:05.138: INFO: (15) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname1/proxy/: foo (200; 5.23961ms) Mar 9 00:31:05.139: INFO: (15) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 5.634839ms) Mar 9 00:31:05.139: INFO: (15) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:1080/proxy/: ... (200; 5.763315ms) Mar 9 00:31:05.139: INFO: (15) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:162/proxy/: bar (200; 5.713726ms) Mar 9 00:31:05.139: INFO: (15) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname2/proxy/: bar (200; 5.870532ms) Mar 9 00:31:05.139: INFO: (15) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname1/proxy/: tls baz (200; 5.759943ms) Mar 9 00:31:05.139: INFO: (15) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname2/proxy/: tls qux (200; 5.78925ms) Mar 9 00:31:05.139: INFO: (15) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm/proxy/: test (200; 5.849986ms) Mar 9 00:31:05.139: INFO: (15) /api/v1/namespaces/proxy-7205/services/http:proxy-service-n5v7s:portname2/proxy/: bar (200; 5.95306ms) Mar 9 00:31:05.142: INFO: (16) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 2.598528ms) Mar 9 00:31:05.142: INFO: (16) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 2.676315ms) Mar 9 00:31:05.143: INFO: (16) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:1080/proxy/: test<... (200; 3.86748ms) Mar 9 00:31:05.143: INFO: (16) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:1080/proxy/: ... 
(200; 3.873816ms) Mar 9 00:31:05.143: INFO: (16) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm/proxy/: test (200; 3.8603ms) Mar 9 00:31:05.144: INFO: (16) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:162/proxy/: bar (200; 4.497483ms) Mar 9 00:31:05.144: INFO: (16) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:162/proxy/: bar (200; 4.55609ms) Mar 9 00:31:05.144: INFO: (16) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:462/proxy/: tls qux (200; 4.638556ms) Mar 9 00:31:05.144: INFO: (16) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:460/proxy/: tls baz (200; 4.64831ms) Mar 9 00:31:05.144: INFO: (16) /api/v1/namespaces/proxy-7205/services/http:proxy-service-n5v7s:portname2/proxy/: bar (200; 4.708553ms) Mar 9 00:31:05.144: INFO: (16) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname1/proxy/: foo (200; 4.707954ms) Mar 9 00:31:05.144: INFO: (16) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:443/proxy/: ... (200; 9.107753ms) Mar 9 00:31:05.154: INFO: (17) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:460/proxy/: tls baz (200; 9.179815ms) Mar 9 00:31:05.154: INFO: (17) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm/proxy/: test (200; 9.156359ms) Mar 9 00:31:05.154: INFO: (17) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:462/proxy/: tls qux (200; 9.231553ms) Mar 9 00:31:05.154: INFO: (17) /api/v1/namespaces/proxy-7205/services/http:proxy-service-n5v7s:portname1/proxy/: foo (200; 9.256544ms) Mar 9 00:31:05.167: INFO: (17) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 22.008141ms) Mar 9 00:31:05.167: INFO: (17) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname1/proxy/: foo (200; 22.003049ms) Mar 9 00:31:05.167: INFO: (17) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:1080/proxy/: test<... (200; 22.088989ms) Mar 9 00:31:05.167: INFO: (17) /api/v1/namespaces/proxy-7205/services/http:proxy-service-n5v7s:portname2/proxy/: bar (200; 22.247144ms) Mar 9 00:31:05.167: INFO: (17) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname1/proxy/: tls baz (200; 22.156748ms) Mar 9 00:31:05.167: INFO: (17) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname2/proxy/: bar (200; 22.216572ms) Mar 9 00:31:05.167: INFO: (17) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname2/proxy/: tls qux (200; 22.196338ms) Mar 9 00:31:05.167: INFO: (17) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:443/proxy/: test<... (200; 6.343307ms) Mar 9 00:31:05.174: INFO: (18) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm/proxy/: test (200; 6.387012ms) Mar 9 00:31:05.174: INFO: (18) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:1080/proxy/: ... 
(200; 6.56828ms) Mar 9 00:31:05.174: INFO: (18) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 6.485554ms) Mar 9 00:31:05.174: INFO: (18) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 6.473743ms) Mar 9 00:31:05.174: INFO: (18) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:462/proxy/: tls qux (200; 6.54344ms) Mar 9 00:31:05.174: INFO: (18) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:443/proxy/: test (200; 4.039931ms) Mar 9 00:31:05.180: INFO: (19) /api/v1/namespaces/proxy-7205/pods/http:proxy-service-n5v7s-jnthm:160/proxy/: foo (200; 4.099976ms) Mar 9 00:31:05.180: INFO: (19) /api/v1/namespaces/proxy-7205/pods/https:proxy-service-n5v7s-jnthm:443/proxy/: ... (200; 4.106473ms) Mar 9 00:31:05.180: INFO: (19) /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:1080/proxy/: test<... (200; 4.187608ms) Mar 9 00:31:05.181: INFO: (19) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname2/proxy/: bar (200; 4.626452ms) Mar 9 00:31:05.181: INFO: (19) /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname1/proxy/: foo (200; 5.030773ms) Mar 9 00:31:05.181: INFO: (19) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname2/proxy/: tls qux (200; 5.121402ms) Mar 9 00:31:05.181: INFO: (19) /api/v1/namespaces/proxy-7205/services/http:proxy-service-n5v7s:portname2/proxy/: bar (200; 5.156306ms) Mar 9 00:31:05.181: INFO: (19) /api/v1/namespaces/proxy-7205/services/http:proxy-service-n5v7s:portname1/proxy/: foo (200; 5.216351ms) Mar 9 00:31:05.181: INFO: (19) /api/v1/namespaces/proxy-7205/services/https:proxy-service-n5v7s:tlsportname1/proxy/: tls baz (200; 5.141862ms) STEP: deleting ReplicationController proxy-service-n5v7s in namespace proxy-7205, will wait for the garbage collector to delete the pods Mar 9 00:31:05.237: INFO: Deleting ReplicationController proxy-service-n5v7s took: 3.856073ms Mar 9 00:31:05.538: INFO: Terminating ReplicationController proxy-service-n5v7s pods took: 300.29453ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:31:12.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7205" for this suite. 
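For reference, every URL exercised by this spec is a plain apiserver path through the proxy subresource, so the same requests can be reproduced by hand with kubectl get --raw — a minimal sketch, assuming the same kubeconfig and reusing the namespace, pod, and service names from this run (substitute your own):

# GET through the pod proxy subresource; port 160 answered "foo" in the log above
kubectl get --raw /api/v1/namespaces/proxy-7205/pods/proxy-service-n5v7s-jnthm:160/proxy/
# GET through the service proxy subresource, addressing the named port portname1
kubectl get --raw /api/v1/namespaces/proxy-7205/services/proxy-service-n5v7s:portname1/proxy/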
• [SLOW TEST:11.733 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":280,"completed":240,"skipped":3993,"failed":0} [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:31:12.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9240.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-9240.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9240.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9240.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-9240.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9240.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 9 00:31:16.662: INFO: DNS probes using dns-9240/dns-test-4292a121-8ab0-4baf-9d12-66fbde9c732b succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:31:16.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9240" for this suite. 
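For readability, here is the wheezy probe loop from the STEP above with the escaping undone — the doubled $$ in the logged command is how a literal $ is escaped in a pod's command field, so the container actually runs the following (all names come straight from this run):

for i in `seq 1 600`; do
  # resolve the pod's hostname via the headless service, both fully qualified and short
  test -n "$(getent hosts dns-querier-2.dns-test-service-2.dns-9240.svc.cluster.local)" \
    && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-9240.svc.cluster.local
  test -n "$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2
  # derive this pod's A-record name from its IP (1.2.3.4 -> 1-2-3-4.dns-9240.pod.cluster.local)
  podARec=$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-9240.pod.cluster.local"}')
  # query the A record over UDP, then over TCP; write a marker file on each success
  check="$(dig +notcp +noall +answer +search ${podARec} A)" && test -n "$check" \
    && echo OK > /results/wheezy_udp@PodARecord
  check="$(dig +tcp +noall +answer +search ${podARec} A)" && test -n "$check" \
    && echo OK > /results/wheezy_tcp@PodARecord
  sleep 1
done

The test then reads the marker files out of /results on each prober pod; a name counts as failed until its file appears, which is the retry pattern visible in the lookup logs earlier in this run.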
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":280,"completed":241,"skipped":3993,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:31:16.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:32:16.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-230" for this suite. • [SLOW TEST:60.117 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":280,"completed":242,"skipped":3999,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:32:16.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 9 00:32:21.061: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 9 00:32:21.076: INFO: Pod pod-with-prestop-exec-hook still exists Mar 9 00:32:23.076: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 9 00:32:23.081: INFO: Pod pod-with-prestop-exec-hook still exists Mar 9 00:32:25.076: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 9 00:32:25.079: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:32:25.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4080" for this suite. • [SLOW TEST:8.213 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":280,"completed":243,"skipped":4005,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:32:25.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 9 00:32:25.192: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:32:26.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8535" for this suite. 
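For context on what "creating/deleting custom resource definition objects works" exercises: the test registers a CustomResourceDefinition against the apiserver and removes it again. A condensed sketch of that round trip with the apiextensions clientset follows; the group, plural, and schema are illustrative placeholders, and it assumes a client-go recent enough that the CRUD calls take a context. The test's own fixture differs.

package main

import (
	"context"
	"log"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Hypothetical group/plural; the e2e suite generates random names instead.
	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextv1.CustomResourceValidation{
					// Minimal structural schema, required for apiextensions/v1.
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
				},
			}},
		},
	}
	ctx := context.TODO()
	if _, err := cs.ApiextensionsV1().CustomResourceDefinitions().Create(ctx, crd, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	// Deleting the definition also garbage-collects all objects of that type.
	if err := cs.ApiextensionsV1().CustomResourceDefinitions().Delete(ctx, crd.Name, metav1.DeleteOptions{}); err != nil {
		log.Fatal(err)
	}
}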
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":280,"completed":244,"skipped":4008,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:32:26.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:32:28.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6369" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":245,"skipped":4065,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:32:28.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7248 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-7248 I0309 00:32:28.573008 7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-7248, replica count: 2 I0309 00:32:31.623459 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 9 00:32:31.623: INFO: Creating new exec pod Mar 9 00:32:34.635: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-7248 execpodm9g66 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 9 00:32:36.348: INFO: stderr: "I0309 
00:32:36.242105 3319 log.go:172] (0xc000c78000) (0xc000d60000) Create stream\nI0309 00:32:36.242174 3319 log.go:172] (0xc000c78000) (0xc000d60000) Stream added, broadcasting: 1\nI0309 00:32:36.244950 3319 log.go:172] (0xc000c78000) Reply frame received for 1\nI0309 00:32:36.244985 3319 log.go:172] (0xc000c78000) (0xc000ed4000) Create stream\nI0309 00:32:36.244996 3319 log.go:172] (0xc000c78000) (0xc000ed4000) Stream added, broadcasting: 3\nI0309 00:32:36.245907 3319 log.go:172] (0xc000c78000) Reply frame received for 3\nI0309 00:32:36.245950 3319 log.go:172] (0xc000c78000) (0xc000ed40a0) Create stream\nI0309 00:32:36.245970 3319 log.go:172] (0xc000c78000) (0xc000ed40a0) Stream added, broadcasting: 5\nI0309 00:32:36.247681 3319 log.go:172] (0xc000c78000) Reply frame received for 5\nI0309 00:32:36.339124 3319 log.go:172] (0xc000c78000) Data frame received for 5\nI0309 00:32:36.339162 3319 log.go:172] (0xc000ed40a0) (5) Data frame handling\nI0309 00:32:36.339173 3319 log.go:172] (0xc000ed40a0) (5) Data frame sent\nI0309 00:32:36.339181 3319 log.go:172] (0xc000c78000) Data frame received for 5\nI0309 00:32:36.339187 3319 log.go:172] (0xc000ed40a0) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0309 00:32:36.339211 3319 log.go:172] (0xc000ed40a0) (5) Data frame sent\nI0309 00:32:36.339218 3319 log.go:172] (0xc000c78000) Data frame received for 5\nI0309 00:32:36.339225 3319 log.go:172] (0xc000ed40a0) (5) Data frame handling\nI0309 00:32:36.339254 3319 log.go:172] (0xc000c78000) Data frame received for 3\nI0309 00:32:36.339280 3319 log.go:172] (0xc000ed4000) (3) Data frame handling\nI0309 00:32:36.340239 3319 log.go:172] (0xc000c78000) Data frame received for 1\nI0309 00:32:36.340254 3319 log.go:172] (0xc000d60000) (1) Data frame handling\nI0309 00:32:36.340272 3319 log.go:172] (0xc000d60000) (1) Data frame sent\nI0309 00:32:36.340317 3319 log.go:172] (0xc000c78000) (0xc000d60000) Stream removed, broadcasting: 1\nI0309 00:32:36.340340 3319 log.go:172] (0xc000c78000) Go away received\nI0309 00:32:36.340558 3319 log.go:172] (0xc000c78000) (0xc000d60000) Stream removed, broadcasting: 1\nI0309 00:32:36.340570 3319 log.go:172] (0xc000c78000) (0xc000ed4000) Stream removed, broadcasting: 3\nI0309 00:32:36.340576 3319 log.go:172] (0xc000c78000) (0xc000ed40a0) Stream removed, broadcasting: 5\n" Mar 9 00:32:36.348: INFO: stdout: "" Mar 9 00:32:36.349: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-7248 execpodm9g66 -- /bin/sh -x -c nc -zv -t -w 2 10.96.242.159 80' Mar 9 00:32:36.520: INFO: stderr: "I0309 00:32:36.468754 3353 log.go:172] (0xc00099d340) (0xc00097e640) Create stream\nI0309 00:32:36.468816 3353 log.go:172] (0xc00099d340) (0xc00097e640) Stream added, broadcasting: 1\nI0309 00:32:36.472390 3353 log.go:172] (0xc00099d340) Reply frame received for 1\nI0309 00:32:36.472422 3353 log.go:172] (0xc00099d340) (0xc0005ce8c0) Create stream\nI0309 00:32:36.472429 3353 log.go:172] (0xc00099d340) (0xc0005ce8c0) Stream added, broadcasting: 3\nI0309 00:32:36.473130 3353 log.go:172] (0xc00099d340) Reply frame received for 3\nI0309 00:32:36.473161 3353 log.go:172] (0xc00099d340) (0xc0003af540) Create stream\nI0309 00:32:36.473172 3353 log.go:172] (0xc00099d340) (0xc0003af540) Stream added, broadcasting: 5\nI0309 00:32:36.474375 3353 log.go:172] (0xc00099d340) Reply frame received for 5\nI0309 00:32:36.516094 3353 log.go:172] (0xc00099d340) 
Data frame received for 5\nI0309 00:32:36.516114 3353 log.go:172] (0xc0003af540) (5) Data frame handling\nI0309 00:32:36.516129 3353 log.go:172] (0xc0003af540) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.242.159 80\nConnection to 10.96.242.159 80 port [tcp/http] succeeded!\nI0309 00:32:36.516206 3353 log.go:172] (0xc00099d340) Data frame received for 5\nI0309 00:32:36.516222 3353 log.go:172] (0xc0003af540) (5) Data frame handling\nI0309 00:32:36.516335 3353 log.go:172] (0xc00099d340) Data frame received for 3\nI0309 00:32:36.516349 3353 log.go:172] (0xc0005ce8c0) (3) Data frame handling\nI0309 00:32:36.517505 3353 log.go:172] (0xc00099d340) Data frame received for 1\nI0309 00:32:36.517525 3353 log.go:172] (0xc00097e640) (1) Data frame handling\nI0309 00:32:36.517538 3353 log.go:172] (0xc00097e640) (1) Data frame sent\nI0309 00:32:36.517552 3353 log.go:172] (0xc00099d340) (0xc00097e640) Stream removed, broadcasting: 1\nI0309 00:32:36.517561 3353 log.go:172] (0xc00099d340) Go away received\nI0309 00:32:36.517896 3353 log.go:172] (0xc00099d340) (0xc00097e640) Stream removed, broadcasting: 1\nI0309 00:32:36.517910 3353 log.go:172] (0xc00099d340) (0xc0005ce8c0) Stream removed, broadcasting: 3\nI0309 00:32:36.517916 3353 log.go:172] (0xc00099d340) (0xc0003af540) Stream removed, broadcasting: 5\n" Mar 9 00:32:36.520: INFO: stdout: "" Mar 9 00:32:36.520: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-7248 execpodm9g66 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.16 31145' Mar 9 00:32:36.698: INFO: stderr: "I0309 00:32:36.624678 3376 log.go:172] (0xc00093c630) (0xc0009de000) Create stream\nI0309 00:32:36.624722 3376 log.go:172] (0xc00093c630) (0xc0009de000) Stream added, broadcasting: 1\nI0309 00:32:36.627696 3376 log.go:172] (0xc00093c630) Reply frame received for 1\nI0309 00:32:36.627729 3376 log.go:172] (0xc00093c630) (0xc00062dae0) Create stream\nI0309 00:32:36.627738 3376 log.go:172] (0xc00093c630) (0xc00062dae0) Stream added, broadcasting: 3\nI0309 00:32:36.628503 3376 log.go:172] (0xc00093c630) Reply frame received for 3\nI0309 00:32:36.628529 3376 log.go:172] (0xc00093c630) (0xc00062dcc0) Create stream\nI0309 00:32:36.628539 3376 log.go:172] (0xc00093c630) (0xc00062dcc0) Stream added, broadcasting: 5\nI0309 00:32:36.629917 3376 log.go:172] (0xc00093c630) Reply frame received for 5\nI0309 00:32:36.692793 3376 log.go:172] (0xc00093c630) Data frame received for 3\nI0309 00:32:36.692836 3376 log.go:172] (0xc00062dae0) (3) Data frame handling\nI0309 00:32:36.692884 3376 log.go:172] (0xc00093c630) Data frame received for 5\nI0309 00:32:36.692897 3376 log.go:172] (0xc00062dcc0) (5) Data frame handling\nI0309 00:32:36.692911 3376 log.go:172] (0xc00062dcc0) (5) Data frame sent\nI0309 00:32:36.692924 3376 log.go:172] (0xc00093c630) Data frame received for 5\nI0309 00:32:36.692938 3376 log.go:172] (0xc00062dcc0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.16 31145\nConnection to 172.17.0.16 31145 port [tcp/31145] succeeded!\nI0309 00:32:36.694540 3376 log.go:172] (0xc00093c630) Data frame received for 1\nI0309 00:32:36.694559 3376 log.go:172] (0xc0009de000) (1) Data frame handling\nI0309 00:32:36.694581 3376 log.go:172] (0xc0009de000) (1) Data frame sent\nI0309 00:32:36.694600 3376 log.go:172] (0xc00093c630) (0xc0009de000) Stream removed, broadcasting: 1\nI0309 00:32:36.694666 3376 log.go:172] (0xc00093c630) Go away received\nI0309 00:32:36.694903 3376 log.go:172] (0xc00093c630) (0xc0009de000) Stream removed, 
broadcasting: 1\nI0309 00:32:36.694919 3376 log.go:172] (0xc00093c630) (0xc00062dae0) Stream removed, broadcasting: 3\nI0309 00:32:36.694931 3376 log.go:172] (0xc00093c630) (0xc00062dcc0) Stream removed, broadcasting: 5\n" Mar 9 00:32:36.698: INFO: stdout: "" Mar 9 00:32:36.698: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-7248 execpodm9g66 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 31145' Mar 9 00:32:36.886: INFO: stderr: "I0309 00:32:36.819893 3399 log.go:172] (0xc0000e94a0) (0xc000ae00a0) Create stream\nI0309 00:32:36.819940 3399 log.go:172] (0xc0000e94a0) (0xc000ae00a0) Stream added, broadcasting: 1\nI0309 00:32:36.822029 3399 log.go:172] (0xc0000e94a0) Reply frame received for 1\nI0309 00:32:36.822069 3399 log.go:172] (0xc0000e94a0) (0xc00059c640) Create stream\nI0309 00:32:36.822079 3399 log.go:172] (0xc0000e94a0) (0xc00059c640) Stream added, broadcasting: 3\nI0309 00:32:36.822821 3399 log.go:172] (0xc0000e94a0) Reply frame received for 3\nI0309 00:32:36.822854 3399 log.go:172] (0xc0000e94a0) (0xc000ae0140) Create stream\nI0309 00:32:36.822863 3399 log.go:172] (0xc0000e94a0) (0xc000ae0140) Stream added, broadcasting: 5\nI0309 00:32:36.823736 3399 log.go:172] (0xc0000e94a0) Reply frame received for 5\nI0309 00:32:36.880914 3399 log.go:172] (0xc0000e94a0) Data frame received for 5\nI0309 00:32:36.880937 3399 log.go:172] (0xc000ae0140) (5) Data frame handling\nI0309 00:32:36.880952 3399 log.go:172] (0xc000ae0140) (5) Data frame sent\nI0309 00:32:36.880959 3399 log.go:172] (0xc0000e94a0) Data frame received for 5\nI0309 00:32:36.880965 3399 log.go:172] (0xc000ae0140) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.18 31145\nConnection to 172.17.0.18 31145 port [tcp/31145] succeeded!\nI0309 00:32:36.881566 3399 log.go:172] (0xc0000e94a0) Data frame received for 3\nI0309 00:32:36.881591 3399 log.go:172] (0xc00059c640) (3) Data frame handling\nI0309 00:32:36.882743 3399 log.go:172] (0xc0000e94a0) Data frame received for 1\nI0309 00:32:36.882767 3399 log.go:172] (0xc000ae00a0) (1) Data frame handling\nI0309 00:32:36.882782 3399 log.go:172] (0xc000ae00a0) (1) Data frame sent\nI0309 00:32:36.882797 3399 log.go:172] (0xc0000e94a0) (0xc000ae00a0) Stream removed, broadcasting: 1\nI0309 00:32:36.883078 3399 log.go:172] (0xc0000e94a0) (0xc000ae00a0) Stream removed, broadcasting: 1\nI0309 00:32:36.883095 3399 log.go:172] (0xc0000e94a0) (0xc00059c640) Stream removed, broadcasting: 3\nI0309 00:32:36.883103 3399 log.go:172] (0xc0000e94a0) (0xc000ae0140) Stream removed, broadcasting: 5\n" Mar 9 00:32:36.886: INFO: stdout: "" Mar 9 00:32:36.886: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:32:36.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7248" for this suite. 
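A note on the four `nc -zv -t -w 2 ...` probes above: each is simply a TCP connect with a two-second timeout, first to the service name and ClusterIP on port 80, then to the allocated NodePort on each node, which is what proves the ExternalName-to-NodePort conversion took effect. A stdlib sketch of the same check follows; the addresses and port are the ones from this run and will differ per cluster.

package main

import (
	"fmt"
	"net"
	"time"
)

// check attempts a TCP connection with a 2s timeout, like `nc -zv -t -w 2`.
func check(host, port string) error {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, port), 2*time.Second)
	if err != nil {
		return err
	}
	defer conn.Close()
	fmt.Printf("Connection to %s %s succeeded!\n", host, port)
	return nil
}

func main() {
	// Service DNS name, ClusterIP, then the NodePort on each node (this run's values).
	for _, t := range [][2]string{
		{"externalname-service", "80"},
		{"10.96.242.159", "80"},
		{"172.17.0.16", "31145"},
		{"172.17.0.18", "31145"},
	} {
		if err := check(t[0], t[1]); err != nil {
			fmt.Println("failed:", err)
		}
	}
}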
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:8.544 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":280,"completed":246,"skipped":4089,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:32:36.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:32:53.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9866" for this suite. • [SLOW TEST:16.207 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":280,"completed":247,"skipped":4130,"failed":0} SS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:32:53.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 9 00:33:01.241: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 00:33:01.241: INFO: >>> kubeConfig: /root/.kube/config I0309 00:33:01.280208 7 log.go:172] (0xc001dea6e0) (0xc000f97a40) Create stream I0309 00:33:01.280248 7 log.go:172] (0xc001dea6e0) (0xc000f97a40) Stream added, broadcasting: 1 I0309 00:33:01.282576 7 log.go:172] (0xc001dea6e0) Reply frame received for 1 I0309 00:33:01.282638 7 log.go:172] (0xc001dea6e0) (0xc000fce140) Create stream I0309 00:33:01.282655 7 log.go:172] (0xc001dea6e0) (0xc000fce140) Stream added, broadcasting: 3 I0309 00:33:01.283572 7 log.go:172] (0xc001dea6e0) Reply frame received for 3 I0309 00:33:01.283617 7 log.go:172] (0xc001dea6e0) (0xc000c56000) Create stream I0309 00:33:01.283634 7 log.go:172] (0xc001dea6e0) (0xc000c56000) Stream added, broadcasting: 5 I0309 00:33:01.284575 7 log.go:172] (0xc001dea6e0) Reply frame received for 5 I0309 00:33:01.349337 7 log.go:172] (0xc001dea6e0) Data frame received for 5 I0309 00:33:01.349357 7 log.go:172] (0xc000c56000) (5) Data frame handling I0309 00:33:01.349376 7 log.go:172] (0xc001dea6e0) Data frame received for 3 I0309 00:33:01.349383 7 log.go:172] (0xc000fce140) (3) Data frame handling I0309 00:33:01.349407 7 log.go:172] (0xc000fce140) (3) Data frame sent I0309 00:33:01.349416 7 log.go:172] (0xc001dea6e0) Data frame received for 3 I0309 00:33:01.349422 7 log.go:172] (0xc000fce140) (3) Data frame handling I0309 00:33:01.350867 7 log.go:172] (0xc001dea6e0) Data frame received for 1 I0309 00:33:01.350898 7 log.go:172] (0xc000f97a40) (1) Data frame handling I0309 00:33:01.350909 7 log.go:172] (0xc000f97a40) (1) Data frame sent I0309 00:33:01.350927 7 log.go:172] (0xc001dea6e0) (0xc000f97a40) Stream removed, broadcasting: 1 I0309 00:33:01.350948 7 log.go:172] (0xc001dea6e0) Go away received I0309 00:33:01.351050 7 log.go:172] (0xc001dea6e0) (0xc000f97a40) Stream removed, broadcasting: 1 I0309 00:33:01.351070 7 log.go:172] (0xc001dea6e0) (0xc000fce140) Stream removed, broadcasting: 3 I0309 00:33:01.351078 7 log.go:172] (0xc001dea6e0) (0xc000c56000) Stream removed, broadcasting: 5 Mar 9 00:33:01.351: INFO: Exec stderr: "" Mar 9 00:33:01.351: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-pod 
ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 00:33:01.351: INFO: >>> kubeConfig: /root/.kube/config I0309 00:33:01.379066 7 log.go:172] (0xc0020e2840) (0xc002893c20) Create stream I0309 00:33:01.379099 7 log.go:172] (0xc0020e2840) (0xc002893c20) Stream added, broadcasting: 1 I0309 00:33:01.381274 7 log.go:172] (0xc0020e2840) Reply frame received for 1 I0309 00:33:01.381305 7 log.go:172] (0xc0020e2840) (0xc002893d60) Create stream I0309 00:33:01.381317 7 log.go:172] (0xc0020e2840) (0xc002893d60) Stream added, broadcasting: 3 I0309 00:33:01.382178 7 log.go:172] (0xc0020e2840) Reply frame received for 3 I0309 00:33:01.382215 7 log.go:172] (0xc0020e2840) (0xc000f97ae0) Create stream I0309 00:33:01.382224 7 log.go:172] (0xc0020e2840) (0xc000f97ae0) Stream added, broadcasting: 5 I0309 00:33:01.383024 7 log.go:172] (0xc0020e2840) Reply frame received for 5 I0309 00:33:01.446217 7 log.go:172] (0xc0020e2840) Data frame received for 5 I0309 00:33:01.446259 7 log.go:172] (0xc000f97ae0) (5) Data frame handling I0309 00:33:01.446288 7 log.go:172] (0xc0020e2840) Data frame received for 3 I0309 00:33:01.446302 7 log.go:172] (0xc002893d60) (3) Data frame handling I0309 00:33:01.446330 7 log.go:172] (0xc002893d60) (3) Data frame sent I0309 00:33:01.446341 7 log.go:172] (0xc0020e2840) Data frame received for 3 I0309 00:33:01.446350 7 log.go:172] (0xc002893d60) (3) Data frame handling I0309 00:33:01.447424 7 log.go:172] (0xc0020e2840) Data frame received for 1 I0309 00:33:01.447445 7 log.go:172] (0xc002893c20) (1) Data frame handling I0309 00:33:01.447491 7 log.go:172] (0xc002893c20) (1) Data frame sent I0309 00:33:01.447514 7 log.go:172] (0xc0020e2840) (0xc002893c20) Stream removed, broadcasting: 1 I0309 00:33:01.447523 7 log.go:172] (0xc0020e2840) Go away received I0309 00:33:01.447612 7 log.go:172] (0xc0020e2840) (0xc002893c20) Stream removed, broadcasting: 1 I0309 00:33:01.447632 7 log.go:172] (0xc0020e2840) (0xc002893d60) Stream removed, broadcasting: 3 I0309 00:33:01.447656 7 log.go:172] (0xc0020e2840) (0xc000f97ae0) Stream removed, broadcasting: 5 Mar 9 00:33:01.447: INFO: Exec stderr: "" Mar 9 00:33:01.447: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 00:33:01.447: INFO: >>> kubeConfig: /root/.kube/config I0309 00:33:01.479276 7 log.go:172] (0xc001deadc0) (0xc000f97d60) Create stream I0309 00:33:01.479312 7 log.go:172] (0xc001deadc0) (0xc000f97d60) Stream added, broadcasting: 1 I0309 00:33:01.481577 7 log.go:172] (0xc001deadc0) Reply frame received for 1 I0309 00:33:01.481613 7 log.go:172] (0xc001deadc0) (0xc000b7a000) Create stream I0309 00:33:01.481632 7 log.go:172] (0xc001deadc0) (0xc000b7a000) Stream added, broadcasting: 3 I0309 00:33:01.482693 7 log.go:172] (0xc001deadc0) Reply frame received for 3 I0309 00:33:01.482725 7 log.go:172] (0xc001deadc0) (0xc002893e00) Create stream I0309 00:33:01.482738 7 log.go:172] (0xc001deadc0) (0xc002893e00) Stream added, broadcasting: 5 I0309 00:33:01.483797 7 log.go:172] (0xc001deadc0) Reply frame received for 5 I0309 00:33:01.544351 7 log.go:172] (0xc001deadc0) Data frame received for 5 I0309 00:33:01.544401 7 log.go:172] (0xc002893e00) (5) Data frame handling I0309 00:33:01.544435 7 log.go:172] (0xc001deadc0) Data frame received for 3 I0309 00:33:01.544453 7 log.go:172] (0xc000b7a000) (3) Data frame handling I0309 00:33:01.544490 7 log.go:172] 
(0xc000b7a000) (3) Data frame sent I0309 00:33:01.544502 7 log.go:172] (0xc001deadc0) Data frame received for 3 I0309 00:33:01.544508 7 log.go:172] (0xc000b7a000) (3) Data frame handling I0309 00:33:01.545963 7 log.go:172] (0xc001deadc0) Data frame received for 1 I0309 00:33:01.545987 7 log.go:172] (0xc000f97d60) (1) Data frame handling I0309 00:33:01.546006 7 log.go:172] (0xc000f97d60) (1) Data frame sent I0309 00:33:01.546020 7 log.go:172] (0xc001deadc0) (0xc000f97d60) Stream removed, broadcasting: 1 I0309 00:33:01.546039 7 log.go:172] (0xc001deadc0) Go away received I0309 00:33:01.546201 7 log.go:172] (0xc001deadc0) (0xc000f97d60) Stream removed, broadcasting: 1 I0309 00:33:01.546229 7 log.go:172] (0xc001deadc0) (0xc000b7a000) Stream removed, broadcasting: 3 I0309 00:33:01.546244 7 log.go:172] (0xc001deadc0) (0xc002893e00) Stream removed, broadcasting: 5 Mar 9 00:33:01.546: INFO: Exec stderr: "" Mar 9 00:33:01.546: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 00:33:01.546: INFO: >>> kubeConfig: /root/.kube/config I0309 00:33:01.568231 7 log.go:172] (0xc0023da580) (0xc000c563c0) Create stream I0309 00:33:01.568251 7 log.go:172] (0xc0023da580) (0xc000c563c0) Stream added, broadcasting: 1 I0309 00:33:01.570464 7 log.go:172] (0xc0023da580) Reply frame received for 1 I0309 00:33:01.570518 7 log.go:172] (0xc0023da580) (0xc0023e3b80) Create stream I0309 00:33:01.570540 7 log.go:172] (0xc0023da580) (0xc0023e3b80) Stream added, broadcasting: 3 I0309 00:33:01.571517 7 log.go:172] (0xc0023da580) Reply frame received for 3 I0309 00:33:01.571555 7 log.go:172] (0xc0023da580) (0xc000c56460) Create stream I0309 00:33:01.571567 7 log.go:172] (0xc0023da580) (0xc000c56460) Stream added, broadcasting: 5 I0309 00:33:01.572423 7 log.go:172] (0xc0023da580) Reply frame received for 5 I0309 00:33:01.631880 7 log.go:172] (0xc0023da580) Data frame received for 3 I0309 00:33:01.631929 7 log.go:172] (0xc0023da580) Data frame received for 5 I0309 00:33:01.631957 7 log.go:172] (0xc000c56460) (5) Data frame handling I0309 00:33:01.631986 7 log.go:172] (0xc0023e3b80) (3) Data frame handling I0309 00:33:01.632027 7 log.go:172] (0xc0023e3b80) (3) Data frame sent I0309 00:33:01.632040 7 log.go:172] (0xc0023da580) Data frame received for 3 I0309 00:33:01.632053 7 log.go:172] (0xc0023e3b80) (3) Data frame handling I0309 00:33:01.633177 7 log.go:172] (0xc0023da580) Data frame received for 1 I0309 00:33:01.633193 7 log.go:172] (0xc000c563c0) (1) Data frame handling I0309 00:33:01.633204 7 log.go:172] (0xc000c563c0) (1) Data frame sent I0309 00:33:01.633218 7 log.go:172] (0xc0023da580) (0xc000c563c0) Stream removed, broadcasting: 1 I0309 00:33:01.633293 7 log.go:172] (0xc0023da580) (0xc000c563c0) Stream removed, broadcasting: 1 I0309 00:33:01.633305 7 log.go:172] (0xc0023da580) (0xc0023e3b80) Stream removed, broadcasting: 3 I0309 00:33:01.633320 7 log.go:172] (0xc0023da580) Go away received I0309 00:33:01.633474 7 log.go:172] (0xc0023da580) (0xc000c56460) Stream removed, broadcasting: 5 Mar 9 00:33:01.633: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 9 00:33:01.633: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 
00:33:01.633: INFO: >>> kubeConfig: /root/.kube/config I0309 00:33:01.659777 7 log.go:172] (0xc0023dabb0) (0xc000c56aa0) Create stream I0309 00:33:01.659805 7 log.go:172] (0xc0023dabb0) (0xc000c56aa0) Stream added, broadcasting: 1 I0309 00:33:01.661939 7 log.go:172] (0xc0023dabb0) Reply frame received for 1 I0309 00:33:01.661973 7 log.go:172] (0xc0023dabb0) (0xc000fce280) Create stream I0309 00:33:01.661982 7 log.go:172] (0xc0023dabb0) (0xc000fce280) Stream added, broadcasting: 3 I0309 00:33:01.662809 7 log.go:172] (0xc0023dabb0) Reply frame received for 3 I0309 00:33:01.662865 7 log.go:172] (0xc0023dabb0) (0xc0023e3c20) Create stream I0309 00:33:01.662887 7 log.go:172] (0xc0023dabb0) (0xc0023e3c20) Stream added, broadcasting: 5 I0309 00:33:01.663674 7 log.go:172] (0xc0023dabb0) Reply frame received for 5 I0309 00:33:01.721415 7 log.go:172] (0xc0023dabb0) Data frame received for 3 I0309 00:33:01.721443 7 log.go:172] (0xc000fce280) (3) Data frame handling I0309 00:33:01.721465 7 log.go:172] (0xc000fce280) (3) Data frame sent I0309 00:33:01.721480 7 log.go:172] (0xc0023dabb0) Data frame received for 3 I0309 00:33:01.721494 7 log.go:172] (0xc000fce280) (3) Data frame handling I0309 00:33:01.721519 7 log.go:172] (0xc0023dabb0) Data frame received for 5 I0309 00:33:01.721539 7 log.go:172] (0xc0023e3c20) (5) Data frame handling I0309 00:33:01.722987 7 log.go:172] (0xc0023dabb0) Data frame received for 1 I0309 00:33:01.723013 7 log.go:172] (0xc000c56aa0) (1) Data frame handling I0309 00:33:01.723028 7 log.go:172] (0xc000c56aa0) (1) Data frame sent I0309 00:33:01.723039 7 log.go:172] (0xc0023dabb0) (0xc000c56aa0) Stream removed, broadcasting: 1 I0309 00:33:01.723114 7 log.go:172] (0xc0023dabb0) (0xc000c56aa0) Stream removed, broadcasting: 1 I0309 00:33:01.723140 7 log.go:172] (0xc0023dabb0) (0xc000fce280) Stream removed, broadcasting: 3 I0309 00:33:01.723226 7 log.go:172] (0xc0023dabb0) Go away received I0309 00:33:01.723343 7 log.go:172] (0xc0023dabb0) (0xc0023e3c20) Stream removed, broadcasting: 5 Mar 9 00:33:01.723: INFO: Exec stderr: "" Mar 9 00:33:01.723: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 00:33:01.723: INFO: >>> kubeConfig: /root/.kube/config I0309 00:33:01.748554 7 log.go:172] (0xc003cd1290) (0xc0023e3f40) Create stream I0309 00:33:01.748583 7 log.go:172] (0xc003cd1290) (0xc0023e3f40) Stream added, broadcasting: 1 I0309 00:33:01.750208 7 log.go:172] (0xc003cd1290) Reply frame received for 1 I0309 00:33:01.750238 7 log.go:172] (0xc003cd1290) (0xc002893f40) Create stream I0309 00:33:01.750245 7 log.go:172] (0xc003cd1290) (0xc002893f40) Stream added, broadcasting: 3 I0309 00:33:01.750875 7 log.go:172] (0xc003cd1290) Reply frame received for 3 I0309 00:33:01.750901 7 log.go:172] (0xc003cd1290) (0xc000fce3c0) Create stream I0309 00:33:01.750916 7 log.go:172] (0xc003cd1290) (0xc000fce3c0) Stream added, broadcasting: 5 I0309 00:33:01.751609 7 log.go:172] (0xc003cd1290) Reply frame received for 5 I0309 00:33:01.804885 7 log.go:172] (0xc003cd1290) Data frame received for 5 I0309 00:33:01.804911 7 log.go:172] (0xc000fce3c0) (5) Data frame handling I0309 00:33:01.804927 7 log.go:172] (0xc003cd1290) Data frame received for 3 I0309 00:33:01.804934 7 log.go:172] (0xc002893f40) (3) Data frame handling I0309 00:33:01.804942 7 log.go:172] (0xc002893f40) (3) Data frame sent I0309 00:33:01.804951 7 log.go:172] (0xc003cd1290) Data frame 
received for 3 I0309 00:33:01.804961 7 log.go:172] (0xc002893f40) (3) Data frame handling I0309 00:33:01.806447 7 log.go:172] (0xc003cd1290) Data frame received for 1 I0309 00:33:01.806482 7 log.go:172] (0xc0023e3f40) (1) Data frame handling I0309 00:33:01.806506 7 log.go:172] (0xc0023e3f40) (1) Data frame sent I0309 00:33:01.806520 7 log.go:172] (0xc003cd1290) (0xc0023e3f40) Stream removed, broadcasting: 1 I0309 00:33:01.806534 7 log.go:172] (0xc003cd1290) Go away received I0309 00:33:01.806667 7 log.go:172] (0xc003cd1290) (0xc0023e3f40) Stream removed, broadcasting: 1 I0309 00:33:01.806687 7 log.go:172] (0xc003cd1290) (0xc002893f40) Stream removed, broadcasting: 3 I0309 00:33:01.806703 7 log.go:172] (0xc003cd1290) (0xc000fce3c0) Stream removed, broadcasting: 5 Mar 9 00:33:01.806: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 9 00:33:01.806: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 00:33:01.806: INFO: >>> kubeConfig: /root/.kube/config I0309 00:33:01.833853 7 log.go:172] (0xc0020fb810) (0xc000fcedc0) Create stream I0309 00:33:01.833879 7 log.go:172] (0xc0020fb810) (0xc000fcedc0) Stream added, broadcasting: 1 I0309 00:33:01.836123 7 log.go:172] (0xc0020fb810) Reply frame received for 1 I0309 00:33:01.836174 7 log.go:172] (0xc0020fb810) (0xc000fcef00) Create stream I0309 00:33:01.836188 7 log.go:172] (0xc0020fb810) (0xc000fcef00) Stream added, broadcasting: 3 I0309 00:33:01.837349 7 log.go:172] (0xc0020fb810) Reply frame received for 3 I0309 00:33:01.837392 7 log.go:172] (0xc0020fb810) (0xc000b741e0) Create stream I0309 00:33:01.837408 7 log.go:172] (0xc0020fb810) (0xc000b741e0) Stream added, broadcasting: 5 I0309 00:33:01.838459 7 log.go:172] (0xc0020fb810) Reply frame received for 5 I0309 00:33:01.900632 7 log.go:172] (0xc0020fb810) Data frame received for 5 I0309 00:33:01.900665 7 log.go:172] (0xc000b741e0) (5) Data frame handling I0309 00:33:01.900695 7 log.go:172] (0xc0020fb810) Data frame received for 3 I0309 00:33:01.900721 7 log.go:172] (0xc000fcef00) (3) Data frame handling I0309 00:33:01.900764 7 log.go:172] (0xc000fcef00) (3) Data frame sent I0309 00:33:01.900784 7 log.go:172] (0xc0020fb810) Data frame received for 3 I0309 00:33:01.900795 7 log.go:172] (0xc000fcef00) (3) Data frame handling I0309 00:33:01.902156 7 log.go:172] (0xc0020fb810) Data frame received for 1 I0309 00:33:01.902185 7 log.go:172] (0xc000fcedc0) (1) Data frame handling I0309 00:33:01.902206 7 log.go:172] (0xc000fcedc0) (1) Data frame sent I0309 00:33:01.902226 7 log.go:172] (0xc0020fb810) (0xc000fcedc0) Stream removed, broadcasting: 1 I0309 00:33:01.902361 7 log.go:172] (0xc0020fb810) Go away received I0309 00:33:01.902386 7 log.go:172] (0xc0020fb810) (0xc000fcedc0) Stream removed, broadcasting: 1 I0309 00:33:01.902410 7 log.go:172] (0xc0020fb810) (0xc000fcef00) Stream removed, broadcasting: 3 I0309 00:33:01.902420 7 log.go:172] (0xc0020fb810) (0xc000b741e0) Stream removed, broadcasting: 5 Mar 9 00:33:01.902: INFO: Exec stderr: "" Mar 9 00:33:01.902: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 00:33:01.902: INFO: >>> kubeConfig: /root/.kube/config I0309 00:33:01.929371 7 
log.go:172] (0xc003cd18c0) (0xc000b74dc0) Create stream I0309 00:33:01.929390 7 log.go:172] (0xc003cd18c0) (0xc000b74dc0) Stream added, broadcasting: 1 I0309 00:33:01.931267 7 log.go:172] (0xc003cd18c0) Reply frame received for 1 I0309 00:33:01.931303 7 log.go:172] (0xc003cd18c0) (0xc000b74f00) Create stream I0309 00:33:01.931321 7 log.go:172] (0xc003cd18c0) (0xc000b74f00) Stream added, broadcasting: 3 I0309 00:33:01.932081 7 log.go:172] (0xc003cd18c0) Reply frame received for 3 I0309 00:33:01.932111 7 log.go:172] (0xc003cd18c0) (0xc000b752c0) Create stream I0309 00:33:01.932121 7 log.go:172] (0xc003cd18c0) (0xc000b752c0) Stream added, broadcasting: 5 I0309 00:33:01.932946 7 log.go:172] (0xc003cd18c0) Reply frame received for 5 I0309 00:33:01.997448 7 log.go:172] (0xc003cd18c0) Data frame received for 3 I0309 00:33:01.997482 7 log.go:172] (0xc000b74f00) (3) Data frame handling I0309 00:33:01.997503 7 log.go:172] (0xc000b74f00) (3) Data frame sent I0309 00:33:01.997518 7 log.go:172] (0xc003cd18c0) Data frame received for 3 I0309 00:33:01.997531 7 log.go:172] (0xc000b74f00) (3) Data frame handling I0309 00:33:01.997730 7 log.go:172] (0xc003cd18c0) Data frame received for 5 I0309 00:33:01.997741 7 log.go:172] (0xc000b752c0) (5) Data frame handling I0309 00:33:01.999155 7 log.go:172] (0xc003cd18c0) Data frame received for 1 I0309 00:33:01.999183 7 log.go:172] (0xc000b74dc0) (1) Data frame handling I0309 00:33:01.999194 7 log.go:172] (0xc000b74dc0) (1) Data frame sent I0309 00:33:01.999209 7 log.go:172] (0xc003cd18c0) (0xc000b74dc0) Stream removed, broadcasting: 1 I0309 00:33:01.999231 7 log.go:172] (0xc003cd18c0) Go away received I0309 00:33:01.999334 7 log.go:172] (0xc003cd18c0) (0xc000b74dc0) Stream removed, broadcasting: 1 I0309 00:33:01.999356 7 log.go:172] (0xc003cd18c0) (0xc000b74f00) Stream removed, broadcasting: 3 I0309 00:33:01.999368 7 log.go:172] (0xc003cd18c0) (0xc000b752c0) Stream removed, broadcasting: 5 Mar 9 00:33:01.999: INFO: Exec stderr: "" Mar 9 00:33:01.999: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 00:33:01.999: INFO: >>> kubeConfig: /root/.kube/config I0309 00:33:02.026641 7 log.go:172] (0xc0023db1e0) (0xc000c56d20) Create stream I0309 00:33:02.026664 7 log.go:172] (0xc0023db1e0) (0xc000c56d20) Stream added, broadcasting: 1 I0309 00:33:02.028417 7 log.go:172] (0xc0023db1e0) Reply frame received for 1 I0309 00:33:02.028442 7 log.go:172] (0xc0023db1e0) (0xc000b75680) Create stream I0309 00:33:02.028452 7 log.go:172] (0xc0023db1e0) (0xc000b75680) Stream added, broadcasting: 3 I0309 00:33:02.029160 7 log.go:172] (0xc0023db1e0) Reply frame received for 3 I0309 00:33:02.029198 7 log.go:172] (0xc0023db1e0) (0xc000fcf220) Create stream I0309 00:33:02.029211 7 log.go:172] (0xc0023db1e0) (0xc000fcf220) Stream added, broadcasting: 5 I0309 00:33:02.030044 7 log.go:172] (0xc0023db1e0) Reply frame received for 5 I0309 00:33:02.085255 7 log.go:172] (0xc0023db1e0) Data frame received for 5 I0309 00:33:02.085292 7 log.go:172] (0xc000fcf220) (5) Data frame handling I0309 00:33:02.085312 7 log.go:172] (0xc0023db1e0) Data frame received for 3 I0309 00:33:02.085322 7 log.go:172] (0xc000b75680) (3) Data frame handling I0309 00:33:02.085335 7 log.go:172] (0xc000b75680) (3) Data frame sent I0309 00:33:02.085592 7 log.go:172] (0xc0023db1e0) Data frame received for 3 I0309 00:33:02.085606 7 log.go:172] (0xc000b75680) (3) Data 
frame handling I0309 00:33:02.086483 7 log.go:172] (0xc0023db1e0) Data frame received for 1 I0309 00:33:02.086499 7 log.go:172] (0xc000c56d20) (1) Data frame handling I0309 00:33:02.086508 7 log.go:172] (0xc000c56d20) (1) Data frame sent I0309 00:33:02.086521 7 log.go:172] (0xc0023db1e0) (0xc000c56d20) Stream removed, broadcasting: 1 I0309 00:33:02.086531 7 log.go:172] (0xc0023db1e0) Go away received I0309 00:33:02.086681 7 log.go:172] (0xc0023db1e0) (0xc000c56d20) Stream removed, broadcasting: 1 I0309 00:33:02.086715 7 log.go:172] (0xc0023db1e0) (0xc000b75680) Stream removed, broadcasting: 3 I0309 00:33:02.086728 7 log.go:172] (0xc0023db1e0) (0xc000fcf220) Stream removed, broadcasting: 5 Mar 9 00:33:02.086: INFO: Exec stderr: "" Mar 9 00:33:02.086: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 00:33:02.086: INFO: >>> kubeConfig: /root/.kube/config I0309 00:33:02.111129 7 log.go:172] (0xc0020e2e70) (0xc000b2a8c0) Create stream I0309 00:33:02.111150 7 log.go:172] (0xc0020e2e70) (0xc000b2a8c0) Stream added, broadcasting: 1 I0309 00:33:02.112829 7 log.go:172] (0xc0020e2e70) Reply frame received for 1 I0309 00:33:02.112855 7 log.go:172] (0xc0020e2e70) (0xc000b2adc0) Create stream I0309 00:33:02.112865 7 log.go:172] (0xc0020e2e70) (0xc000b2adc0) Stream added, broadcasting: 3 I0309 00:33:02.113525 7 log.go:172] (0xc0020e2e70) Reply frame received for 3 I0309 00:33:02.113551 7 log.go:172] (0xc0020e2e70) (0xc000b75b80) Create stream I0309 00:33:02.113561 7 log.go:172] (0xc0020e2e70) (0xc000b75b80) Stream added, broadcasting: 5 I0309 00:33:02.114301 7 log.go:172] (0xc0020e2e70) Reply frame received for 5 I0309 00:33:02.169639 7 log.go:172] (0xc0020e2e70) Data frame received for 3 I0309 00:33:02.169664 7 log.go:172] (0xc000b2adc0) (3) Data frame handling I0309 00:33:02.169673 7 log.go:172] (0xc000b2adc0) (3) Data frame sent I0309 00:33:02.169678 7 log.go:172] (0xc0020e2e70) Data frame received for 3 I0309 00:33:02.169683 7 log.go:172] (0xc000b2adc0) (3) Data frame handling I0309 00:33:02.169704 7 log.go:172] (0xc0020e2e70) Data frame received for 5 I0309 00:33:02.169712 7 log.go:172] (0xc000b75b80) (5) Data frame handling I0309 00:33:02.171393 7 log.go:172] (0xc0020e2e70) Data frame received for 1 I0309 00:33:02.171436 7 log.go:172] (0xc000b2a8c0) (1) Data frame handling I0309 00:33:02.171473 7 log.go:172] (0xc000b2a8c0) (1) Data frame sent I0309 00:33:02.171495 7 log.go:172] (0xc0020e2e70) (0xc000b2a8c0) Stream removed, broadcasting: 1 I0309 00:33:02.171514 7 log.go:172] (0xc0020e2e70) Go away received I0309 00:33:02.171668 7 log.go:172] (0xc0020e2e70) (0xc000b2a8c0) Stream removed, broadcasting: 1 I0309 00:33:02.171697 7 log.go:172] (0xc0020e2e70) (0xc000b2adc0) Stream removed, broadcasting: 3 I0309 00:33:02.171719 7 log.go:172] (0xc0020e2e70) (0xc000b75b80) Stream removed, broadcasting: 5 Mar 9 00:33:02.171: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:33:02.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-4024" for this suite. 
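The `Create stream` / `Data frame received` chatter throughout this test is the client side of SPDY exec sessions opened by ExecWithOptions: one multiplexed connection per `cat /etc/hosts`, with stdout, stderr, and the error channel as separate streams. A condensed sketch of issuing one such exec with client-go follows; it reuses this run's namespace, pod, and container names and is a plain client-go usage example, not the framework's implementation.

package main

import (
	"bytes"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Build the pods/exec subresource request, as the e2e helper does internally.
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace("e2e-kubelet-etc-hosts-4024").
		Name("test-pod").SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "busybox-1",
			Command:   []string{"cat", "/etc/hosts"},
			Stdout:    true, Stderr: true,
		}, scheme.ParameterCodec)
	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		log.Fatal(err)
	}
	var stdout, stderr bytes.Buffer
	// Stream drives the multiplexed SPDY channels seen in the log lines above.
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		log.Fatal(err)
	}
	fmt.Print(stdout.String())
}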
• [SLOW TEST:9.051 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":248,"skipped":4132,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:33:02.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 9 00:33:02.241: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Mar 9 00:33:05.033: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-504 create -f -' Mar 9 00:33:06.996: INFO: stderr: "" Mar 9 00:33:06.996: INFO: stdout: "e2e-test-crd-publish-openapi-6543-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 9 00:33:06.997: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-504 delete e2e-test-crd-publish-openapi-6543-crds test-foo' Mar 9 00:33:07.113: INFO: stderr: "" Mar 9 00:33:07.113: INFO: stdout: "e2e-test-crd-publish-openapi-6543-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Mar 9 00:33:07.113: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-504 apply -f -' Mar 9 00:33:07.371: INFO: stderr: "" Mar 9 00:33:07.371: INFO: stdout: "e2e-test-crd-publish-openapi-6543-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 9 00:33:07.371: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-504 delete e2e-test-crd-publish-openapi-6543-crds test-foo' Mar 9 00:33:07.474: INFO: stderr: "" Mar 9 00:33:07.474: INFO: stdout: "e2e-test-crd-publish-openapi-6543-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Mar 9 00:33:07.474: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-504 create -f -' Mar 9 00:33:07.742: INFO: rc: 1 Mar 9 00:33:07.742: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-504 apply -f -' Mar 9 00:33:07.999: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Mar 9 00:33:08.000: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-504 create -f -' Mar 9 00:33:08.224: INFO: rc: 1 Mar 9 00:33:08.224: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-504 apply -f -' Mar 9 00:33:08.447: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Mar 9 00:33:08.447: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6543-crds' Mar 9 00:33:08.659: INFO: stderr: "" Mar 9 00:33:08.659: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6543-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Mar 9 00:33:08.659: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6543-crds.metadata' Mar 9 00:33:08.891: INFO: stderr: "" Mar 9 00:33:08.891: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6543-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. 
Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. 
If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. 
Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Mar 9 00:33:08.892: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6543-crds.spec' Mar 9 00:33:09.172: INFO: stderr: "" Mar 9 00:33:09.172: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6543-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Mar 9 00:33:09.172: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6543-crds.spec.bars' Mar 9 00:33:09.420: INFO: stderr: "" Mar 9 00:33:09.420: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6543-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Mar 9 00:33:09.420: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6543-crds.spec.bars2' Mar 9 00:33:09.687: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:33:12.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-504" for this suite. • [SLOW TEST:10.355 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":280,"completed":249,"skipped":4220,"failed":0} [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:33:12.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 9 00:33:12.592: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:33:18.581: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "custom-resource-definition-9361" for this suite. • [SLOW TEST:6.051 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":280,"completed":250,"skipped":4220,"failed":0} S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:33:18.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:33:47.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8864" for this suite. STEP: Destroying namespace "nsdeletetest-8632" for this suite. Mar 9 00:33:47.833: INFO: Namespace nsdeletetest-8632 was already deleted STEP: Destroying namespace "nsdeletetest-9296" for this suite. 
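The namespace test above leans on cascading deletion: deleting a Namespace deletes every object scoped to it, pods included, and the namespace itself only disappears once that cleanup finishes. A minimal client-go sketch of the same flow, assuming a recent client-go whose methods take a context; the kubeconfig path matches this run, and the "nsdeletetest" name is illustrative:

package main

import (
    "context"
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ctx := context.Background()

    // Deleting the namespace cascades to every object inside it.
    if err := cs.CoreV1().Namespaces().Delete(ctx, "nsdeletetest", metav1.DeleteOptions{}); err != nil {
        panic(err)
    }

    // Poll until the namespace is fully gone; its pods are removed with it.
    for {
        if _, err := cs.CoreV1().Namespaces().Get(ctx, "nsdeletetest", metav1.GetOptions{}); err != nil {
            fmt.Println("namespace removed:", err)
            return
        }
        time.Sleep(2 * time.Second)
    }
}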
• [SLOW TEST:29.250 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":280,"completed":251,"skipped":4221,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:33:47.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 9 00:33:47.897: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9654 /api/v1/namespaces/watch-9654/configmaps/e2e-watch-test-configmap-a 95972608-99cd-41dc-9a89-b41604dc312d 150471 0 2020-03-09 00:33:47 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 9 00:33:47.897: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9654 /api/v1/namespaces/watch-9654/configmaps/e2e-watch-test-configmap-a 95972608-99cd-41dc-9a89-b41604dc312d 150471 0 2020-03-09 00:33:47 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 9 00:33:57.904: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9654 /api/v1/namespaces/watch-9654/configmaps/e2e-watch-test-configmap-a 95972608-99cd-41dc-9a89-b41604dc312d 150509 0 2020-03-09 00:33:47 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 9 00:33:57.904: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9654 /api/v1/namespaces/watch-9654/configmaps/e2e-watch-test-configmap-a 95972608-99cd-41dc-9a89-b41604dc312d 150509 0 2020-03-09 00:33:47 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 9 00:34:07.911: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9654 /api/v1/namespaces/watch-9654/configmaps/e2e-watch-test-configmap-a 
95972608-99cd-41dc-9a89-b41604dc312d 150539 0 2020-03-09 00:33:47 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 9 00:34:07.912: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9654 /api/v1/namespaces/watch-9654/configmaps/e2e-watch-test-configmap-a 95972608-99cd-41dc-9a89-b41604dc312d 150539 0 2020-03-09 00:33:47 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 9 00:34:17.918: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9654 /api/v1/namespaces/watch-9654/configmaps/e2e-watch-test-configmap-a 95972608-99cd-41dc-9a89-b41604dc312d 150569 0 2020-03-09 00:33:47 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 9 00:34:17.918: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9654 /api/v1/namespaces/watch-9654/configmaps/e2e-watch-test-configmap-a 95972608-99cd-41dc-9a89-b41604dc312d 150569 0 2020-03-09 00:33:47 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 9 00:34:27.925: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9654 /api/v1/namespaces/watch-9654/configmaps/e2e-watch-test-configmap-b c6b3664d-e7e8-4c34-a569-cef009889df7 150600 0 2020-03-09 00:34:27 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 9 00:34:27.925: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9654 /api/v1/namespaces/watch-9654/configmaps/e2e-watch-test-configmap-b c6b3664d-e7e8-4c34-a569-cef009889df7 150600 0 2020-03-09 00:34:27 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 9 00:34:37.931: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9654 /api/v1/namespaces/watch-9654/configmaps/e2e-watch-test-configmap-b c6b3664d-e7e8-4c34-a569-cef009889df7 150630 0 2020-03-09 00:34:27 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 9 00:34:37.931: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9654 /api/v1/namespaces/watch-9654/configmaps/e2e-watch-test-configmap-b c6b3664d-e7e8-4c34-a569-cef009889df7 150630 0 2020-03-09 00:34:27 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:34:47.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9654" for this suite. 
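Each "Got : ADDED/MODIFIED/DELETED" record above is one event delivered on a label-filtered watch; the A-or-B watcher simply uses a broader selector over the same ConfigMaps. A sketch of opening such a watch with client-go (a recent, context-taking client-go is assumed; the namespace and selector mirror this run):

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    // Watch only the ConfigMaps carrying label A, as watcher A does above.
    w, err := cs.CoreV1().ConfigMaps("watch-9654").Watch(context.Background(), metav1.ListOptions{
        LabelSelector: "watch-this-configmap=multiple-watchers-A",
    })
    if err != nil {
        panic(err)
    }
    defer w.Stop()

    // Every add, update, and delete arrives as a single typed event.
    for ev := range w.ResultChan() {
        fmt.Println("Got :", ev.Type, ev.Object)
    }
}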
• [SLOW TEST:60.101 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":280,"completed":252,"skipped":4248,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:34:47.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-9060 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating statefulset ss in namespace statefulset-9060 Mar 9 00:34:48.032: INFO: Found 0 stateful pods, waiting for 1 Mar 9 00:34:58.036: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 9 00:34:58.071: INFO: Deleting all statefulset in ns statefulset-9060 Mar 9 00:34:58.105: INFO: Scaling statefulset ss to 0 Mar 9 00:35:18.138: INFO: Waiting for statefulset status.replicas updated to 0 Mar 9 00:35:18.140: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:35:18.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9060" for this suite. 
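The "getting/updating a scale subresource" steps above never write the StatefulSet object itself: the scale subresource exposes only the replica count, so a client can resize without being able to change (or accidentally clobber) the rest of the spec. A sketch using client-go's GetScale/UpdateScale, with a recent client-go assumed; the "ss" name and namespace are taken from this run:

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ctx := context.Background()

    // Read the current scale through the subresource...
    sc, err := cs.AppsV1().StatefulSets("statefulset-9060").GetScale(ctx, "ss", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }

    // ...change only the replica count, and write it back the same way.
    sc.Spec.Replicas = 2
    if _, err := cs.AppsV1().StatefulSets("statefulset-9060").UpdateScale(ctx, "ss", sc, metav1.UpdateOptions{}); err != nil {
        panic(err)
    }
}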
• [SLOW TEST:30.251 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":280,"completed":253,"skipped":4259,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:35:18.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2078.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2078.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2078.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2078.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 9 00:35:22.261: INFO: DNS probes using dns-test-72a50b13-b19a-44f2-b5e8-f0ca73f2e442 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2078.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2078.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2078.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2078.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 9 00:35:26.372: INFO: DNS probes using dns-test-9c973605-7615-4401-8942-1dfe57a79b4f succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2078.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2078.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2078.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2078.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from 
probers Mar 9 00:35:30.464: INFO: DNS probes using dns-test-6bf2b6d0-3310-4978-be56-b31ed18aa69b succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:35:30.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2078" for this suite. • [SLOW TEST:12.360 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":280,"completed":254,"skipped":4261,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:35:30.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap configmap-388/configmap-test-a552d682-bd5f-47c9-8710-e548f1e3cc70 STEP: Creating a pod to test consume configMaps Mar 9 00:35:30.668: INFO: Waiting up to 5m0s for pod "pod-configmaps-cac85e45-12d6-45ce-bfc4-cf0dae18f46e" in namespace "configmap-388" to be "success or failure" Mar 9 00:35:30.688: INFO: Pod "pod-configmaps-cac85e45-12d6-45ce-bfc4-cf0dae18f46e": Phase="Pending", Reason="", readiness=false. Elapsed: 20.529804ms Mar 9 00:35:32.692: INFO: Pod "pod-configmaps-cac85e45-12d6-45ce-bfc4-cf0dae18f46e": Phase="Running", Reason="", readiness=true. Elapsed: 2.024146258s Mar 9 00:35:34.696: INFO: Pod "pod-configmaps-cac85e45-12d6-45ce-bfc4-cf0dae18f46e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027785054s STEP: Saw pod success Mar 9 00:35:34.696: INFO: Pod "pod-configmaps-cac85e45-12d6-45ce-bfc4-cf0dae18f46e" satisfied condition "success or failure" Mar 9 00:35:34.698: INFO: Trying to get logs from node latest-worker pod pod-configmaps-cac85e45-12d6-45ce-bfc4-cf0dae18f46e container env-test: STEP: delete the pod Mar 9 00:35:34.733: INFO: Waiting for pod pod-configmaps-cac85e45-12d6-45ce-bfc4-cf0dae18f46e to disappear Mar 9 00:35:34.738: INFO: Pod pod-configmaps-cac85e45-12d6-45ce-bfc4-cf0dae18f46e no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:35:34.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-388" for this suite. 
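The env-test container above receives ConfigMap data as environment variables through valueFrom/configMapKeyRef, resolved by the kubelet at pod start. A sketch of such a pod spec in Go, printed as JSON so it runs without a cluster; the ConfigMap name and key here are illustrative, not the generated names from this run:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-env"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "env-test",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"sh", "-c", "env"}, // dump the injected variables
                Env: []corev1.EnvVar{{
                    Name: "CONFIG_DATA_1",
                    ValueFrom: &corev1.EnvVarSource{
                        ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
                            Key:                  "data-1",
                        },
                    },
                }},
            }},
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}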
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":280,"completed":255,"skipped":4273,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:35:34.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Mar 9 00:35:34.901: INFO: >>> kubeConfig: /root/.kube/config Mar 9 00:35:36.770: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:35:46.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6078" for this suite. • [SLOW TEST:11.352 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":280,"completed":256,"skipped":4276,"failed":0} SS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:35:46.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod liveness-8f06a82c-1084-4521-b098-0b3e60405a43 in namespace container-probe-5596 Mar 9 00:35:48.175: INFO: Started pod liveness-8f06a82c-1084-4521-b098-0b3e60405a43 in namespace container-probe-5596 STEP: checking the pod's current state and verifying that restartCount is present Mar 9 00:35:48.178: INFO: Initial restart count of pod liveness-8f06a82c-1084-4521-b098-0b3e60405a43 is 0 Mar 9 00:36:06.263: INFO: Restart count of pod 
container-probe-5596/liveness-8f06a82c-1084-4521-b098-0b3e60405a43 is now 1 (18.085662158s elapsed) Mar 9 00:36:26.300: INFO: Restart count of pod container-probe-5596/liveness-8f06a82c-1084-4521-b098-0b3e60405a43 is now 2 (38.122214181s elapsed) Mar 9 00:36:46.335: INFO: Restart count of pod container-probe-5596/liveness-8f06a82c-1084-4521-b098-0b3e60405a43 is now 3 (58.157821297s elapsed) Mar 9 00:37:06.371: INFO: Restart count of pod container-probe-5596/liveness-8f06a82c-1084-4521-b098-0b3e60405a43 is now 4 (1m18.193721706s elapsed) Mar 9 00:38:16.559: INFO: Restart count of pod container-probe-5596/liveness-8f06a82c-1084-4521-b098-0b3e60405a43 is now 5 (2m28.381302031s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:38:16.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5596" for this suite. • [SLOW TEST:150.505 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":280,"completed":257,"skipped":4278,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:38:16.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 9 00:38:16.673: INFO: Waiting up to 5m0s for pod "pod-325139a6-b4cf-495c-b8e9-6d076e15c09e" in namespace "emptydir-5945" to be "success or failure" Mar 9 00:38:16.676: INFO: Pod "pod-325139a6-b4cf-495c-b8e9-6d076e15c09e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.720754ms Mar 9 00:38:18.679: INFO: Pod "pod-325139a6-b4cf-495c-b8e9-6d076e15c09e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.00654365s STEP: Saw pod success Mar 9 00:38:18.680: INFO: Pod "pod-325139a6-b4cf-495c-b8e9-6d076e15c09e" satisfied condition "success or failure" Mar 9 00:38:18.682: INFO: Trying to get logs from node latest-worker pod pod-325139a6-b4cf-495c-b8e9-6d076e15c09e container test-container: STEP: delete the pod Mar 9 00:38:18.715: INFO: Waiting for pod pod-325139a6-b4cf-495c-b8e9-6d076e15c09e to disappear Mar 9 00:38:18.718: INFO: Pod pod-325139a6-b4cf-495c-b8e9-6d076e15c09e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:38:18.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5945" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":258,"skipped":4286,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:38:18.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:38:29.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1405" for this suite. • [SLOW TEST:11.207 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":280,"completed":259,"skipped":4288,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:38:29.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with configMap that has name projected-configmap-test-upd-17771945-b158-4196-b822-229f2e5120de STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-17771945-b158-4196-b822-229f2e5120de STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:38:34.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3255" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":260,"skipped":4301,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:38:34.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0309 00:39:14.170743 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 9 00:39:14.170: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:39:14.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5113" for this suite. • [SLOW TEST:40.135 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":280,"completed":261,"skipped":4308,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:39:14.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Mar 9 00:39:14.255: INFO: Waiting up to 5m0s for pod "downward-api-4db6dc11-ce30-4563-99f0-33d84b92097b" in namespace "downward-api-586" to be "success or failure" Mar 9 00:39:14.258: INFO: Pod "downward-api-4db6dc11-ce30-4563-99f0-33d84b92097b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.906259ms Mar 9 00:39:16.261: INFO: Pod "downward-api-4db6dc11-ce30-4563-99f0-33d84b92097b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.005797592s STEP: Saw pod success Mar 9 00:39:16.261: INFO: Pod "downward-api-4db6dc11-ce30-4563-99f0-33d84b92097b" satisfied condition "success or failure" Mar 9 00:39:16.266: INFO: Trying to get logs from node latest-worker pod downward-api-4db6dc11-ce30-4563-99f0-33d84b92097b container dapi-container: STEP: delete the pod Mar 9 00:39:16.277: INFO: Waiting for pod downward-api-4db6dc11-ce30-4563-99f0-33d84b92097b to disappear Mar 9 00:39:16.293: INFO: Pod downward-api-4db6dc11-ce30-4563-99f0-33d84b92097b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:39:16.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-586" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":280,"completed":262,"skipped":4320,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:39:16.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name cm-test-opt-del-f88a9e18-977c-4267-9cee-6bdc33a65bde STEP: Creating configMap with name cm-test-opt-upd-d7ef1317-c355-4f98-9037-f62fcfbb868f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-f88a9e18-977c-4267-9cee-6bdc33a65bde STEP: Updating configmap cm-test-opt-upd-d7ef1317-c355-4f98-9037-f62fcfbb868f STEP: Creating configMap with name cm-test-opt-create-e4828d0a-bc46-42fa-9b61-79d3e2cab20c STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:40:52.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4908" for this suite. 
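The opt-del/opt-upd/opt-create steps above work because the ConfigMap volume sources are marked optional: the pod starts whether or not the referenced ConfigMap exists, and the projected files appear, change, or vanish in place as the ConfigMaps do. A sketch of one such volume, with illustrative names, printed as JSON so it runs without a cluster:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    optional := true
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-optional"},
        Spec: corev1.PodSpec{
            Volumes: []corev1.Volume{{
                Name: "cm-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
                        Optional:             &optional, // pod starts even if the ConfigMap is absent
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "cm-volume-test",
                Image:        "docker.io/library/busybox:1.29",
                Command:      []string{"sh", "-c", "sleep 3600"},
                VolumeMounts: []corev1.VolumeMount{{Name: "cm-volume", MountPath: "/etc/cm"}},
            }},
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}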
• [SLOW TEST:96.676 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":263,"skipped":4365,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:40:52.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 9 00:40:55.062: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:40:55.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7249" for this suite. 
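FallbackToLogsOnError only substitutes the tail of the container log when the container fails and the termination-message file is empty; here the container succeeds after writing "OK" to the file, so the file contents are what the test reads back, matching "Expected: &{OK}". A sketch of the relevant container fields, with image and command illustrative:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "termination-message-container",
                Image: "docker.io/library/busybox:1.29",
                // Succeed after writing the message file; the policy below only
                // falls back to logs when the container fails AND the file is empty.
                Command:                  []string{"sh", "-c", "printf OK > /dev/termination-log"},
                TerminationMessagePath:   "/dev/termination-log",
                TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
            }},
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}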
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":264,"skipped":4380,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:40:55.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 9 00:40:55.164: INFO: Waiting up to 5m0s for pod "pod-d7edffed-9870-44be-891b-82e6e724f1e2" in namespace "emptydir-8647" to be "success or failure" Mar 9 00:40:55.169: INFO: Pod "pod-d7edffed-9870-44be-891b-82e6e724f1e2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.600992ms Mar 9 00:40:57.173: INFO: Pod "pod-d7edffed-9870-44be-891b-82e6e724f1e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00950451s STEP: Saw pod success Mar 9 00:40:57.173: INFO: Pod "pod-d7edffed-9870-44be-891b-82e6e724f1e2" satisfied condition "success or failure" Mar 9 00:40:57.176: INFO: Trying to get logs from node latest-worker2 pod pod-d7edffed-9870-44be-891b-82e6e724f1e2 container test-container: STEP: delete the pod Mar 9 00:40:57.208: INFO: Waiting for pod pod-d7edffed-9870-44be-891b-82e6e724f1e2 to disappear Mar 9 00:40:57.240: INFO: Pod pod-d7edffed-9870-44be-891b-82e6e724f1e2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:40:57.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8647" for this suite. 
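The emptydir test above requests a tmpfs-backed volume by setting the emptyDir medium to "Memory" and runs the container as a non-root user, then checks the volume's permissions and the 0666 file mode from inside. A sketch of the corresponding spec; the UID, image, and paths are illustrative:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    uid := int64(1001) // any non-root UID
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // Medium "Memory" backs the emptyDir with tmpfs instead of node disk.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                },
            }},
            Containers: []corev1.Container{{
                Name:            "test-container",
                Image:           "docker.io/library/busybox:1.29",
                Command:         []string{"sh", "-c", "ls -la /test-volume"},
                SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
                VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}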
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":265,"skipped":4393,"failed":0} SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:40:57.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-70 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 9 00:40:57.309: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 9 00:40:57.326: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 9 00:40:59.330: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 9 00:41:01.329: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 9 00:41:03.329: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 9 00:41:05.330: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 9 00:41:07.330: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 9 00:41:09.329: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 9 00:41:11.329: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 9 00:41:11.334: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 9 00:41:13.338: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 9 00:41:17.361: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.226:8080/dial?request=hostname&protocol=udp&host=10.244.1.225&port=8081&tries=1'] Namespace:pod-network-test-70 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 00:41:17.361: INFO: >>> kubeConfig: /root/.kube/config I0309 00:41:17.393992 7 log.go:172] (0xc0023da580) (0xc002892c80) Create stream I0309 00:41:17.394025 7 log.go:172] (0xc0023da580) (0xc002892c80) Stream added, broadcasting: 1 I0309 00:41:17.396685 7 log.go:172] (0xc0023da580) Reply frame received for 1 I0309 00:41:17.396731 7 log.go:172] (0xc0023da580) (0xc00035b5e0) Create stream I0309 00:41:17.396747 7 log.go:172] (0xc0023da580) (0xc00035b5e0) Stream added, broadcasting: 3 I0309 00:41:17.397755 7 log.go:172] (0xc0023da580) Reply frame received for 3 I0309 00:41:17.397797 7 log.go:172] (0xc0023da580) (0xc002893040) Create stream I0309 00:41:17.397812 7 log.go:172] (0xc0023da580) (0xc002893040) Stream added, broadcasting: 5 I0309 00:41:17.399167 7 log.go:172] (0xc0023da580) Reply frame received for 5 I0309 00:41:17.466649 7 log.go:172] (0xc0023da580) Data frame received for 3 I0309 00:41:17.466683 7 log.go:172] (0xc00035b5e0) (3) Data frame handling I0309 00:41:17.466697 7 log.go:172] (0xc00035b5e0) (3) Data frame sent I0309 00:41:17.466786 7 
log.go:172] (0xc0023da580) Data frame received for 3 I0309 00:41:17.466802 7 log.go:172] (0xc00035b5e0) (3) Data frame handling I0309 00:41:17.467028 7 log.go:172] (0xc0023da580) Data frame received for 5 I0309 00:41:17.467043 7 log.go:172] (0xc002893040) (5) Data frame handling I0309 00:41:17.469290 7 log.go:172] (0xc0023da580) Data frame received for 1 I0309 00:41:17.469316 7 log.go:172] (0xc002892c80) (1) Data frame handling I0309 00:41:17.469334 7 log.go:172] (0xc002892c80) (1) Data frame sent I0309 00:41:17.469351 7 log.go:172] (0xc0023da580) (0xc002892c80) Stream removed, broadcasting: 1 I0309 00:41:17.469370 7 log.go:172] (0xc0023da580) Go away received I0309 00:41:17.469516 7 log.go:172] (0xc0023da580) (0xc002892c80) Stream removed, broadcasting: 1 I0309 00:41:17.469536 7 log.go:172] (0xc0023da580) (0xc00035b5e0) Stream removed, broadcasting: 3 I0309 00:41:17.469546 7 log.go:172] (0xc0023da580) (0xc002893040) Stream removed, broadcasting: 5 Mar 9 00:41:17.469: INFO: Waiting for responses: map[] Mar 9 00:41:17.472: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.226:8080/dial?request=hostname&protocol=udp&host=10.244.2.147&port=8081&tries=1'] Namespace:pod-network-test-70 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 00:41:17.472: INFO: >>> kubeConfig: /root/.kube/config I0309 00:41:17.496670 7 log.go:172] (0xc0020e28f0) (0xc0016f9ea0) Create stream I0309 00:41:17.496691 7 log.go:172] (0xc0020e28f0) (0xc0016f9ea0) Stream added, broadcasting: 1 I0309 00:41:17.503787 7 log.go:172] (0xc0020e28f0) Reply frame received for 1 I0309 00:41:17.503824 7 log.go:172] (0xc0020e28f0) (0xc002a08000) Create stream I0309 00:41:17.503836 7 log.go:172] (0xc0020e28f0) (0xc002a08000) Stream added, broadcasting: 3 I0309 00:41:17.504632 7 log.go:172] (0xc0020e28f0) Reply frame received for 3 I0309 00:41:17.504656 7 log.go:172] (0xc0020e28f0) (0xc002a080a0) Create stream I0309 00:41:17.504661 7 log.go:172] (0xc0020e28f0) (0xc002a080a0) Stream added, broadcasting: 5 I0309 00:41:17.505337 7 log.go:172] (0xc0020e28f0) Reply frame received for 5 I0309 00:41:17.566496 7 log.go:172] (0xc0020e28f0) Data frame received for 3 I0309 00:41:17.566523 7 log.go:172] (0xc002a08000) (3) Data frame handling I0309 00:41:17.566543 7 log.go:172] (0xc002a08000) (3) Data frame sent I0309 00:41:17.567233 7 log.go:172] (0xc0020e28f0) Data frame received for 3 I0309 00:41:17.567272 7 log.go:172] (0xc002a08000) (3) Data frame handling I0309 00:41:17.567381 7 log.go:172] (0xc0020e28f0) Data frame received for 5 I0309 00:41:17.567396 7 log.go:172] (0xc002a080a0) (5) Data frame handling I0309 00:41:17.568446 7 log.go:172] (0xc0020e28f0) Data frame received for 1 I0309 00:41:17.568462 7 log.go:172] (0xc0016f9ea0) (1) Data frame handling I0309 00:41:17.568478 7 log.go:172] (0xc0016f9ea0) (1) Data frame sent I0309 00:41:17.568913 7 log.go:172] (0xc0020e28f0) (0xc0016f9ea0) Stream removed, broadcasting: 1 I0309 00:41:17.569004 7 log.go:172] (0xc0020e28f0) (0xc0016f9ea0) Stream removed, broadcasting: 1 I0309 00:41:17.569032 7 log.go:172] (0xc0020e28f0) (0xc002a08000) Stream removed, broadcasting: 3 I0309 00:41:17.569056 7 log.go:172] (0xc0020e28f0) (0xc002a080a0) Stream removed, broadcasting: 5 Mar 9 00:41:17.569: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:41:17.569: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready I0309 00:41:17.569425 7 log.go:172] (0xc0020e28f0) Go away received STEP: Destroying namespace "pod-network-test-70" for this suite. • [SLOW TEST:20.333 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":280,"completed":266,"skipped":4397,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:41:17.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 9 00:41:17.682: INFO: Creating deployment "test-recreate-deployment" Mar 9 00:41:17.718: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 Mar 9 00:41:17.727: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 9 00:41:19.789: INFO: Waiting for deployment "test-recreate-deployment" to complete Mar 9 00:41:19.792: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 9 00:41:19.799: INFO: Updating deployment test-recreate-deployment Mar 9 00:41:19.799: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 9 00:41:20.402: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-3990 /apis/apps/v1/namespaces/deployment-3990/deployments/test-recreate-deployment 863296e9-34a3-46e6-ad2d-53d1f2969cdb 152549 2 2020-03-09 00:41:17 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003d3e8e8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-09 00:41:19 +0000 UTC,LastTransitionTime:2020-03-09 00:41:19 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-03-09 00:41:20 +0000 UTC,LastTransitionTime:2020-03-09 00:41:17 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Mar 9 00:41:20.405: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-3990 /apis/apps/v1/namespaces/deployment-3990/replicasets/test-recreate-deployment-5f94c574ff f4b87beb-8022-4df7-b23b-dfcab1a0a7b1 152544 1 2020-03-09 00:41:19 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 863296e9-34a3-46e6-ad2d-53d1f2969cdb 0xc003d3ec77 0xc003d3ec78}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003d3ecd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 9 00:41:20.405: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 9 00:41:20.405: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-3990 /apis/apps/v1/namespaces/deployment-3990/replicasets/test-recreate-deployment-799c574856 1fe75b14-9144-4a5c-8cc8-89179eb545da 152534 2 2020-03-09 00:41:17 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 863296e9-34a3-46e6-ad2d-53d1f2969cdb 0xc003d3ed47 0xc003d3ed48}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003d3edb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 9 00:41:20.410: INFO: Pod "test-recreate-deployment-5f94c574ff-2l7nq" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-2l7nq test-recreate-deployment-5f94c574ff- deployment-3990 /api/v1/namespaces/deployment-3990/pods/test-recreate-deployment-5f94c574ff-2l7nq edff01b8-1ef1-4e3a-ae3e-b923fd065751 152548 0 2020-03-09 00:41:19 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff f4b87beb-8022-4df7-b23b-dfcab1a0a7b1 0xc003d3f207 0xc003d3f208}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fl2kd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fl2kd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fl2kd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},
Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 00:41:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 00:41:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 00:41:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 00:41:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-09 00:41:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:41:20.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3990" for this suite. 
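------------------------------
Note: the dump above shows Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,}, which is the behavior this spec exercises: with the Recreate strategy the controller scales the old ReplicaSet (test-recreate-deployment-799c574856, running agnhost) to zero before creating any pod for the new one (test-recreate-deployment-5f94c574ff, running httpd), which is why the new pod is still Pending/ContainerCreating while no old pod remains. A minimal sketch of reproducing this outside the e2e framework follows; the deployment name is illustrative, and the two images are the ones the test itself rolled between.

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-demo              # illustrative name, not from the test
spec:
  replicas: 1
  strategy:
    type: Recreate                 # old pods are deleted before new ones start
  selector:
    matchLabels:
      app: recreate-demo
  template:
    metadata:
      labels:
        app: recreate-demo
    spec:
      containers:
      - name: app
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
EOF
# Trigger revision 2 with the image the test switched to, then watch the handover:
kubectl set image deployment/recreate-demo app=docker.io/library/httpd:2.4.38-alpine
kubectl get pods -l app=recreate-demo -w    # the agnhost pod terminates before the httpd pod is created

Because type: Recreate has no maxSurge/maxUnavailable tuning, availability necessarily drops to zero during each rollout, which is exactly the Available=False/MinimumReplicasUnavailable condition visible in the status above.
------------------------------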
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":280,"completed":267,"skipped":4405,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:41:20.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:41:22.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-301" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":280,"completed":268,"skipped":4413,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:41:22.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1735 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 9 00:41:22.617: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-5409' Mar 9 00:41:22.706: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 9 00:41:22.706: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1740 Mar 9 00:41:24.747: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5409' Mar 9 00:41:24.870: INFO: stderr: "" Mar 9 00:41:24.870: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:41:24.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5409" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":280,"completed":269,"skipped":4418,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:41:24.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 9 00:41:24.942: INFO: Waiting up to 5m0s for pod "pod-5d0d4a13-f440-4317-8208-e87406b97bc9" in namespace "emptydir-9667" to be "success or failure" Mar 9 00:41:24.950: INFO: Pod "pod-5d0d4a13-f440-4317-8208-e87406b97bc9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.990308ms Mar 9 00:41:26.954: INFO: Pod "pod-5d0d4a13-f440-4317-8208-e87406b97bc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011888639s STEP: Saw pod success Mar 9 00:41:26.954: INFO: Pod "pod-5d0d4a13-f440-4317-8208-e87406b97bc9" satisfied condition "success or failure" Mar 9 00:41:26.957: INFO: Trying to get logs from node latest-worker pod pod-5d0d4a13-f440-4317-8208-e87406b97bc9 container test-container: STEP: delete the pod Mar 9 00:41:26.990: INFO: Waiting for pod pod-5d0d4a13-f440-4317-8208-e87406b97bc9 to disappear Mar 9 00:41:27.003: INFO: Pod pod-5d0d4a13-f440-4317-8208-e87406b97bc9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:41:27.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9667" for this suite. 
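------------------------------
Note: in the (root,0666,tmpfs) spec above, "tmpfs" means the emptyDir volume is declared with medium: Memory, so the kubelet backs the mount with a RAM filesystem, and the test container writes a file with mode 0666 whose listing is then read back from the container log. A rough standalone equivalent is sketched below under stated assumptions: the pod name, volume name, mount path, and busybox image are illustrative, not taken from this run.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo        # illustrative name, not from the test
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo hello > /vol/test-file && chmod 0666 /vol/test-file && ls -l /vol/test-file && mount | grep ' /vol '"]
    volumeMounts:
    - name: vol
      mountPath: /vol
  volumes:
  - name: vol
    emptyDir:
      medium: Memory               # tmpfs rather than node-local disk
EOF
# After the pod completes, its log should show -rw-rw-rw- and a tmpfs mount on /vol:
kubectl logs emptydir-tmpfs-demo
------------------------------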
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":270,"skipped":4435,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:41:27.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-7353 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 9 00:41:27.092: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 9 00:41:27.134: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 9 00:41:29.137: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 9 00:41:31.137: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 9 00:41:33.138: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 9 00:41:35.145: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 9 00:41:37.138: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 9 00:41:39.137: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 9 00:41:39.143: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 9 00:41:41.147: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 9 00:41:43.147: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 9 00:41:45.147: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 9 00:41:49.193: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.230:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7353 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 00:41:49.193: INFO: >>> kubeConfig: /root/.kube/config I0309 00:41:49.229722 7 log.go:172] (0xc001dea0b0) (0xc000b741e0) Create stream I0309 00:41:49.229756 7 log.go:172] (0xc001dea0b0) (0xc000b741e0) Stream added, broadcasting: 1 I0309 00:41:49.231766 7 log.go:172] (0xc001dea0b0) Reply frame received for 1 I0309 00:41:49.231807 7 log.go:172] (0xc001dea0b0) (0xc000b74640) Create stream I0309 00:41:49.231820 7 log.go:172] (0xc001dea0b0) (0xc000b74640) Stream added, broadcasting: 3 I0309 00:41:49.232776 7 log.go:172] (0xc001dea0b0) Reply frame received for 3 I0309 00:41:49.232812 7 log.go:172] (0xc001dea0b0) (0xc001f805a0) Create stream I0309 00:41:49.232822 7 log.go:172] (0xc001dea0b0) (0xc001f805a0) Stream added, broadcasting: 5 I0309 00:41:49.233985 7 log.go:172] (0xc001dea0b0) Reply frame received for 5 I0309 00:41:49.330734 7 log.go:172] (0xc001dea0b0) Data frame received for 5 I0309 00:41:49.330765 7 log.go:172] (0xc001f805a0) (5) Data 
frame handling I0309 00:41:49.330785 7 log.go:172] (0xc001dea0b0) Data frame received for 3 I0309 00:41:49.330799 7 log.go:172] (0xc000b74640) (3) Data frame handling I0309 00:41:49.330816 7 log.go:172] (0xc000b74640) (3) Data frame sent I0309 00:41:49.331290 7 log.go:172] (0xc001dea0b0) Data frame received for 3 I0309 00:41:49.331310 7 log.go:172] (0xc000b74640) (3) Data frame handling I0309 00:41:49.332948 7 log.go:172] (0xc001dea0b0) Data frame received for 1 I0309 00:41:49.332973 7 log.go:172] (0xc000b741e0) (1) Data frame handling I0309 00:41:49.332987 7 log.go:172] (0xc000b741e0) (1) Data frame sent I0309 00:41:49.333000 7 log.go:172] (0xc001dea0b0) (0xc000b741e0) Stream removed, broadcasting: 1 I0309 00:41:49.333017 7 log.go:172] (0xc001dea0b0) Go away received I0309 00:41:49.333147 7 log.go:172] (0xc001dea0b0) (0xc000b741e0) Stream removed, broadcasting: 1 I0309 00:41:49.333164 7 log.go:172] (0xc001dea0b0) (0xc000b74640) Stream removed, broadcasting: 3 I0309 00:41:49.333172 7 log.go:172] (0xc001dea0b0) (0xc001f805a0) Stream removed, broadcasting: 5 Mar 9 00:41:49.333: INFO: Found all expected endpoints: [netserver-0] Mar 9 00:41:49.336: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.150:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7353 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 00:41:49.336: INFO: >>> kubeConfig: /root/.kube/config I0309 00:41:49.359245 7 log.go:172] (0xc0020fb130) (0xc0016f86e0) Create stream I0309 00:41:49.359278 7 log.go:172] (0xc0020fb130) (0xc0016f86e0) Stream added, broadcasting: 1 I0309 00:41:49.360708 7 log.go:172] (0xc0020fb130) Reply frame received for 1 I0309 00:41:49.360742 7 log.go:172] (0xc0020fb130) (0xc001f80640) Create stream I0309 00:41:49.360756 7 log.go:172] (0xc0020fb130) (0xc001f80640) Stream added, broadcasting: 3 I0309 00:41:49.361597 7 log.go:172] (0xc0020fb130) Reply frame received for 3 I0309 00:41:49.361628 7 log.go:172] (0xc0020fb130) (0xc001f80780) Create stream I0309 00:41:49.361642 7 log.go:172] (0xc0020fb130) (0xc001f80780) Stream added, broadcasting: 5 I0309 00:41:49.362401 7 log.go:172] (0xc0020fb130) Reply frame received for 5 I0309 00:41:49.412978 7 log.go:172] (0xc0020fb130) Data frame received for 3 I0309 00:41:49.413014 7 log.go:172] (0xc001f80640) (3) Data frame handling I0309 00:41:49.413058 7 log.go:172] (0xc001f80640) (3) Data frame sent I0309 00:41:49.413074 7 log.go:172] (0xc0020fb130) Data frame received for 3 I0309 00:41:49.413088 7 log.go:172] (0xc001f80640) (3) Data frame handling I0309 00:41:49.413102 7 log.go:172] (0xc0020fb130) Data frame received for 5 I0309 00:41:49.413115 7 log.go:172] (0xc001f80780) (5) Data frame handling I0309 00:41:49.414480 7 log.go:172] (0xc0020fb130) Data frame received for 1 I0309 00:41:49.414503 7 log.go:172] (0xc0016f86e0) (1) Data frame handling I0309 00:41:49.414524 7 log.go:172] (0xc0016f86e0) (1) Data frame sent I0309 00:41:49.414538 7 log.go:172] (0xc0020fb130) (0xc0016f86e0) Stream removed, broadcasting: 1 I0309 00:41:49.414559 7 log.go:172] (0xc0020fb130) Go away received I0309 00:41:49.414633 7 log.go:172] (0xc0020fb130) (0xc0016f86e0) Stream removed, broadcasting: 1 I0309 00:41:49.414649 7 log.go:172] (0xc0020fb130) (0xc001f80640) Stream removed, broadcasting: 3 I0309 00:41:49.414658 7 log.go:172] (0xc0020fb130) (0xc001f80780) Stream removed, broadcasting: 5 Mar 9 00:41:49.414: INFO: Found all expected endpoints: 
[netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:41:49.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7353" for this suite. • [SLOW TEST:22.412 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":271,"skipped":4449,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:41:49.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8166.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-8166.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8166.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-8166.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8166.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8166.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-8166.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8166.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-8166.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8166.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 9 00:41:53.537: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:41:53.541: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:41:53.545: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:41:53.548: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:41:53.558: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:41:53.561: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:41:53.564: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8166.svc.cluster.local from pod 
dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:41:53.568: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:41:53.574: INFO: Lookups using dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8166.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8166.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local jessie_udp@dns-test-service-2.dns-8166.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8166.svc.cluster.local] Mar 9 00:41:58.613: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:41:58.617: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:41:58.620: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:41:58.622: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:41:58.630: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:41:58.633: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:41:58.635: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:41:58.637: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:41:58.643: INFO: Lookups using dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-8166.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8166.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local jessie_udp@dns-test-service-2.dns-8166.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8166.svc.cluster.local] Mar 9 00:42:03.581: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:03.584: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:03.587: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:03.589: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:03.600: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:03.603: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:03.605: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:03.608: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:03.614: INFO: Lookups using dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8166.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8166.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local jessie_udp@dns-test-service-2.dns-8166.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8166.svc.cluster.local] Mar 9 00:42:08.581: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:08.584: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:08.587: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:08.589: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:08.598: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:08.601: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:08.604: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:08.606: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:08.610: INFO: Lookups using dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8166.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8166.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local jessie_udp@dns-test-service-2.dns-8166.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8166.svc.cluster.local] Mar 9 00:42:13.579: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:13.582: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:13.588: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:13.591: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource 
(get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:13.603: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:13.606: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:13.608: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:13.611: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:13.617: INFO: Lookups using dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8166.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8166.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local jessie_udp@dns-test-service-2.dns-8166.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8166.svc.cluster.local] Mar 9 00:42:18.626: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:18.629: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:18.632: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:18.635: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:18.645: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:18.647: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:18.650: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8166.svc.cluster.local from 
pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:18.653: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8166.svc.cluster.local from pod dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754: the server could not find the requested resource (get pods dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754) Mar 9 00:42:18.657: INFO: Lookups using dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8166.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8166.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8166.svc.cluster.local jessie_udp@dns-test-service-2.dns-8166.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8166.svc.cluster.local] Mar 9 00:42:23.633: INFO: DNS probes using dns-8166/dns-test-60c8ebb4-6e12-4b39-941c-0f4f4bd8a754 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:42:23.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8166" for this suite. • [SLOW TEST:34.338 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":280,"completed":272,"skipped":4455,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:42:23.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 9 00:42:23.834: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5a8136f2-30dc-4f9a-86ba-15f8aec2b53e" in namespace "projected-6949" to be "success or failure" Mar 9 00:42:23.846: INFO: Pod "downwardapi-volume-5a8136f2-30dc-4f9a-86ba-15f8aec2b53e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.977174ms Mar 9 00:42:25.850: INFO: Pod "downwardapi-volume-5a8136f2-30dc-4f9a-86ba-15f8aec2b53e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.015983132s STEP: Saw pod success Mar 9 00:42:25.850: INFO: Pod "downwardapi-volume-5a8136f2-30dc-4f9a-86ba-15f8aec2b53e" satisfied condition "success or failure" Mar 9 00:42:25.853: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-5a8136f2-30dc-4f9a-86ba-15f8aec2b53e container client-container: STEP: delete the pod Mar 9 00:42:25.891: INFO: Waiting for pod downwardapi-volume-5a8136f2-30dc-4f9a-86ba-15f8aec2b53e to disappear Mar 9 00:42:25.902: INFO: Pod downwardapi-volume-5a8136f2-30dc-4f9a-86ba-15f8aec2b53e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:42:25.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6949" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":273,"skipped":4487,"failed":0} SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:42:25.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-configmap-nqct STEP: Creating a pod to test atomic-volume-subpath Mar 9 00:42:26.030: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-nqct" in namespace "subpath-2237" to be "success or failure" Mar 9 00:42:26.034: INFO: Pod "pod-subpath-test-configmap-nqct": Phase="Pending", Reason="", readiness=false. Elapsed: 4.414579ms Mar 9 00:42:28.038: INFO: Pod "pod-subpath-test-configmap-nqct": Phase="Running", Reason="", readiness=true. Elapsed: 2.008503271s Mar 9 00:42:30.043: INFO: Pod "pod-subpath-test-configmap-nqct": Phase="Running", Reason="", readiness=true. Elapsed: 4.01254689s Mar 9 00:42:32.047: INFO: Pod "pod-subpath-test-configmap-nqct": Phase="Running", Reason="", readiness=true. Elapsed: 6.016551075s Mar 9 00:42:34.051: INFO: Pod "pod-subpath-test-configmap-nqct": Phase="Running", Reason="", readiness=true. Elapsed: 8.020604824s Mar 9 00:42:36.055: INFO: Pod "pod-subpath-test-configmap-nqct": Phase="Running", Reason="", readiness=true. Elapsed: 10.024577119s Mar 9 00:42:38.058: INFO: Pod "pod-subpath-test-configmap-nqct": Phase="Running", Reason="", readiness=true. Elapsed: 12.028290614s Mar 9 00:42:40.062: INFO: Pod "pod-subpath-test-configmap-nqct": Phase="Running", Reason="", readiness=true. Elapsed: 14.031923102s Mar 9 00:42:42.066: INFO: Pod "pod-subpath-test-configmap-nqct": Phase="Running", Reason="", readiness=true. Elapsed: 16.035649068s Mar 9 00:42:44.070: INFO: Pod "pod-subpath-test-configmap-nqct": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.039618916s Mar 9 00:42:46.073: INFO: Pod "pod-subpath-test-configmap-nqct": Phase="Running", Reason="", readiness=true. Elapsed: 20.043295947s Mar 9 00:42:48.077: INFO: Pod "pod-subpath-test-configmap-nqct": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.047107114s STEP: Saw pod success Mar 9 00:42:48.077: INFO: Pod "pod-subpath-test-configmap-nqct" satisfied condition "success or failure" Mar 9 00:42:48.080: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-nqct container test-container-subpath-configmap-nqct: STEP: delete the pod Mar 9 00:42:48.097: INFO: Waiting for pod pod-subpath-test-configmap-nqct to disappear Mar 9 00:42:48.102: INFO: Pod pod-subpath-test-configmap-nqct no longer exists STEP: Deleting pod pod-subpath-test-configmap-nqct Mar 9 00:42:48.102: INFO: Deleting pod "pod-subpath-test-configmap-nqct" in namespace "subpath-2237" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:42:48.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2237" for this suite. • [SLOW TEST:22.197 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":280,"completed":274,"skipped":4493,"failed":0} S ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:42:48.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Mar 9 00:42:48.182: INFO: Waiting up to 5m0s for pod "downward-api-fc474f16-f7a9-4590-aaaa-f0155a3376f4" in namespace "downward-api-8115" to be "success or failure" Mar 9 00:42:48.186: INFO: Pod "downward-api-fc474f16-f7a9-4590-aaaa-f0155a3376f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.949369ms Mar 9 00:42:50.190: INFO: Pod "downward-api-fc474f16-f7a9-4590-aaaa-f0155a3376f4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007712529s STEP: Saw pod success Mar 9 00:42:50.190: INFO: Pod "downward-api-fc474f16-f7a9-4590-aaaa-f0155a3376f4" satisfied condition "success or failure" Mar 9 00:42:50.192: INFO: Trying to get logs from node latest-worker pod downward-api-fc474f16-f7a9-4590-aaaa-f0155a3376f4 container dapi-container: STEP: delete the pod Mar 9 00:42:50.227: INFO: Waiting for pod downward-api-fc474f16-f7a9-4590-aaaa-f0155a3376f4 to disappear Mar 9 00:42:50.230: INFO: Pod downward-api-fc474f16-f7a9-4590-aaaa-f0155a3376f4 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:42:50.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8115" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":280,"completed":275,"skipped":4494,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:42:50.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-4936 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating stateful set ss in namespace statefulset-4936 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4936 Mar 9 00:42:50.317: INFO: Found 0 stateful pods, waiting for 1 Mar 9 00:43:00.321: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 9 00:43:00.325: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4936 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 9 00:43:00.552: INFO: stderr: "I0309 00:43:00.456807 3742 log.go:172] (0xc000020fd0) (0xc0008b4000) Create stream\nI0309 00:43:00.456858 3742 log.go:172] (0xc000020fd0) (0xc0008b4000) Stream added, broadcasting: 1\nI0309 00:43:00.464015 3742 log.go:172] (0xc000020fd0) Reply frame received for 1\nI0309 00:43:00.464049 3742 log.go:172] (0xc000020fd0) (0xc000a60000) Create stream\nI0309 00:43:00.464060 3742 log.go:172] (0xc000020fd0) (0xc000a60000) Stream added, broadcasting: 3\nI0309 00:43:00.465041 3742 log.go:172] (0xc000020fd0) Reply frame received for 3\nI0309 00:43:00.465088 3742 log.go:172] (0xc000020fd0) (0xc0008b40a0) Create stream\nI0309 00:43:00.465103 3742 log.go:172] 
(0xc000020fd0) (0xc0008b40a0) Stream added, broadcasting: 5\nI0309 00:43:00.465990 3742 log.go:172] (0xc000020fd0) Reply frame received for 5\nI0309 00:43:00.529000 3742 log.go:172] (0xc000020fd0) Data frame received for 5\nI0309 00:43:00.529029 3742 log.go:172] (0xc0008b40a0) (5) Data frame handling\nI0309 00:43:00.529045 3742 log.go:172] (0xc0008b40a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0309 00:43:00.547103 3742 log.go:172] (0xc000020fd0) Data frame received for 3\nI0309 00:43:00.547128 3742 log.go:172] (0xc000a60000) (3) Data frame handling\nI0309 00:43:00.547144 3742 log.go:172] (0xc000a60000) (3) Data frame sent\nI0309 00:43:00.547152 3742 log.go:172] (0xc000020fd0) Data frame received for 3\nI0309 00:43:00.547157 3742 log.go:172] (0xc000a60000) (3) Data frame handling\nI0309 00:43:00.547427 3742 log.go:172] (0xc000020fd0) Data frame received for 5\nI0309 00:43:00.547452 3742 log.go:172] (0xc0008b40a0) (5) Data frame handling\nI0309 00:43:00.549086 3742 log.go:172] (0xc000020fd0) Data frame received for 1\nI0309 00:43:00.549107 3742 log.go:172] (0xc0008b4000) (1) Data frame handling\nI0309 00:43:00.549123 3742 log.go:172] (0xc0008b4000) (1) Data frame sent\nI0309 00:43:00.549146 3742 log.go:172] (0xc000020fd0) (0xc0008b4000) Stream removed, broadcasting: 1\nI0309 00:43:00.549190 3742 log.go:172] (0xc000020fd0) Go away received\nI0309 00:43:00.549518 3742 log.go:172] (0xc000020fd0) (0xc0008b4000) Stream removed, broadcasting: 1\nI0309 00:43:00.549538 3742 log.go:172] (0xc000020fd0) (0xc000a60000) Stream removed, broadcasting: 3\nI0309 00:43:00.549550 3742 log.go:172] (0xc000020fd0) (0xc0008b40a0) Stream removed, broadcasting: 5\n" Mar 9 00:43:00.552: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 9 00:43:00.552: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 9 00:43:00.556: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 9 00:43:10.560: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 9 00:43:10.560: INFO: Waiting for statefulset status.replicas updated to 0 Mar 9 00:43:10.581: INFO: POD NODE PHASE GRACE CONDITIONS Mar 9 00:43:10.581: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:42:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:42:50 +0000 UTC }] Mar 9 00:43:10.581: INFO: Mar 9 00:43:10.581: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 9 00:43:11.585: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987331688s Mar 9 00:43:12.590: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.98292575s Mar 9 00:43:13.594: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.978652161s Mar 9 00:43:14.599: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.973894028s Mar 9 00:43:15.604: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.969206345s Mar 9 00:43:16.609: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.964485717s Mar 9 00:43:17.614: INFO: Verifying statefulset ss 
doesn't scale past 3 for another 2.959666716s Mar 9 00:43:18.633: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.954166298s Mar 9 00:43:19.641: INFO: Verifying statefulset ss doesn't scale past 3 for another 935.571928ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4936 Mar 9 00:43:20.645: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4936 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 9 00:43:22.525: INFO: stderr: "I0309 00:43:22.461830 3763 log.go:172] (0xc00003abb0) (0xc0006d5e00) Create stream\nI0309 00:43:22.461863 3763 log.go:172] (0xc00003abb0) (0xc0006d5e00) Stream added, broadcasting: 1\nI0309 00:43:22.464451 3763 log.go:172] (0xc00003abb0) Reply frame received for 1\nI0309 00:43:22.464487 3763 log.go:172] (0xc00003abb0) (0xc000726000) Create stream\nI0309 00:43:22.464499 3763 log.go:172] (0xc00003abb0) (0xc000726000) Stream added, broadcasting: 3\nI0309 00:43:22.465535 3763 log.go:172] (0xc00003abb0) Reply frame received for 3\nI0309 00:43:22.465595 3763 log.go:172] (0xc00003abb0) (0xc000758000) Create stream\nI0309 00:43:22.465614 3763 log.go:172] (0xc00003abb0) (0xc000758000) Stream added, broadcasting: 5\nI0309 00:43:22.466662 3763 log.go:172] (0xc00003abb0) Reply frame received for 5\nI0309 00:43:22.518777 3763 log.go:172] (0xc00003abb0) Data frame received for 5\nI0309 00:43:22.518818 3763 log.go:172] (0xc000758000) (5) Data frame handling\nI0309 00:43:22.518838 3763 log.go:172] (0xc000758000) (5) Data frame sent\nI0309 00:43:22.518859 3763 log.go:172] (0xc00003abb0) Data frame received for 5\nI0309 00:43:22.518871 3763 log.go:172] (0xc000758000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0309 00:43:22.518898 3763 log.go:172] (0xc00003abb0) Data frame received for 3\nI0309 00:43:22.518928 3763 log.go:172] (0xc000726000) (3) Data frame handling\nI0309 00:43:22.518946 3763 log.go:172] (0xc000726000) (3) Data frame sent\nI0309 00:43:22.518962 3763 log.go:172] (0xc00003abb0) Data frame received for 3\nI0309 00:43:22.518979 3763 log.go:172] (0xc000726000) (3) Data frame handling\nI0309 00:43:22.520200 3763 log.go:172] (0xc00003abb0) Data frame received for 1\nI0309 00:43:22.520222 3763 log.go:172] (0xc0006d5e00) (1) Data frame handling\nI0309 00:43:22.520242 3763 log.go:172] (0xc0006d5e00) (1) Data frame sent\nI0309 00:43:22.520272 3763 log.go:172] (0xc00003abb0) (0xc0006d5e00) Stream removed, broadcasting: 1\nI0309 00:43:22.520281 3763 log.go:172] (0xc00003abb0) Go away received\nI0309 00:43:22.520712 3763 log.go:172] (0xc00003abb0) (0xc0006d5e00) Stream removed, broadcasting: 1\nI0309 00:43:22.520733 3763 log.go:172] (0xc00003abb0) (0xc000726000) Stream removed, broadcasting: 3\nI0309 00:43:22.520745 3763 log.go:172] (0xc00003abb0) (0xc000758000) Stream removed, broadcasting: 5\n" Mar 9 00:43:22.525: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 9 00:43:22.525: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 9 00:43:22.525: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4936 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 9 00:43:22.765: INFO: stderr: "I0309 00:43:22.683205 
3793 log.go:172] (0xc0009ac000) (0xc000af0000) Create stream\nI0309 00:43:22.683253 3793 log.go:172] (0xc0009ac000) (0xc000af0000) Stream added, broadcasting: 1\nI0309 00:43:22.686009 3793 log.go:172] (0xc0009ac000) Reply frame received for 1\nI0309 00:43:22.686038 3793 log.go:172] (0xc0009ac000) (0xc000613ae0) Create stream\nI0309 00:43:22.686046 3793 log.go:172] (0xc0009ac000) (0xc000613ae0) Stream added, broadcasting: 3\nI0309 00:43:22.686876 3793 log.go:172] (0xc0009ac000) Reply frame received for 3\nI0309 00:43:22.686900 3793 log.go:172] (0xc0009ac000) (0xc000723400) Create stream\nI0309 00:43:22.686909 3793 log.go:172] (0xc0009ac000) (0xc000723400) Stream added, broadcasting: 5\nI0309 00:43:22.687858 3793 log.go:172] (0xc0009ac000) Reply frame received for 5\nI0309 00:43:22.761381 3793 log.go:172] (0xc0009ac000) Data frame received for 5\nI0309 00:43:22.761403 3793 log.go:172] (0xc000723400) (5) Data frame handling\nI0309 00:43:22.761411 3793 log.go:172] (0xc000723400) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0309 00:43:22.761434 3793 log.go:172] (0xc0009ac000) Data frame received for 5\nI0309 00:43:22.761441 3793 log.go:172] (0xc000723400) (5) Data frame handling\nI0309 00:43:22.761455 3793 log.go:172] (0xc0009ac000) Data frame received for 3\nI0309 00:43:22.761462 3793 log.go:172] (0xc000613ae0) (3) Data frame handling\nI0309 00:43:22.761469 3793 log.go:172] (0xc000613ae0) (3) Data frame sent\nI0309 00:43:22.761474 3793 log.go:172] (0xc0009ac000) Data frame received for 3\nI0309 00:43:22.761482 3793 log.go:172] (0xc000613ae0) (3) Data frame handling\nI0309 00:43:22.762456 3793 log.go:172] (0xc0009ac000) Data frame received for 1\nI0309 00:43:22.762478 3793 log.go:172] (0xc000af0000) (1) Data frame handling\nI0309 00:43:22.762491 3793 log.go:172] (0xc000af0000) (1) Data frame sent\nI0309 00:43:22.762505 3793 log.go:172] (0xc0009ac000) (0xc000af0000) Stream removed, broadcasting: 1\nI0309 00:43:22.762520 3793 log.go:172] (0xc0009ac000) Go away received\nI0309 00:43:22.762838 3793 log.go:172] (0xc0009ac000) (0xc000af0000) Stream removed, broadcasting: 1\nI0309 00:43:22.762855 3793 log.go:172] (0xc0009ac000) (0xc000613ae0) Stream removed, broadcasting: 3\nI0309 00:43:22.762861 3793 log.go:172] (0xc0009ac000) (0xc000723400) Stream removed, broadcasting: 5\n" Mar 9 00:43:22.765: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 9 00:43:22.765: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 9 00:43:22.765: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4936 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 9 00:43:22.962: INFO: stderr: "I0309 00:43:22.896312 3813 log.go:172] (0xc0000e0bb0) (0xc000703e00) Create stream\nI0309 00:43:22.896362 3813 log.go:172] (0xc0000e0bb0) (0xc000703e00) Stream added, broadcasting: 1\nI0309 00:43:22.898593 3813 log.go:172] (0xc0000e0bb0) Reply frame received for 1\nI0309 00:43:22.898628 3813 log.go:172] (0xc0000e0bb0) (0xc0006846e0) Create stream\nI0309 00:43:22.898640 3813 log.go:172] (0xc0000e0bb0) (0xc0006846e0) Stream added, broadcasting: 3\nI0309 00:43:22.899514 3813 log.go:172] (0xc0000e0bb0) Reply frame received for 3\nI0309 00:43:22.899546 3813 log.go:172] (0xc0000e0bb0) (0xc000551360) Create 
stream\nI0309 00:43:22.899559 3813 log.go:172] (0xc0000e0bb0) (0xc000551360) Stream added, broadcasting: 5\nI0309 00:43:22.900368 3813 log.go:172] (0xc0000e0bb0) Reply frame received for 5\nI0309 00:43:22.957164 3813 log.go:172] (0xc0000e0bb0) Data frame received for 3\nI0309 00:43:22.957210 3813 log.go:172] (0xc0006846e0) (3) Data frame handling\nI0309 00:43:22.957224 3813 log.go:172] (0xc0006846e0) (3) Data frame sent\nI0309 00:43:22.957235 3813 log.go:172] (0xc0000e0bb0) Data frame received for 3\nI0309 00:43:22.957245 3813 log.go:172] (0xc0006846e0) (3) Data frame handling\nI0309 00:43:22.957274 3813 log.go:172] (0xc0000e0bb0) Data frame received for 5\nI0309 00:43:22.957290 3813 log.go:172] (0xc000551360) (5) Data frame handling\nI0309 00:43:22.957308 3813 log.go:172] (0xc000551360) (5) Data frame sent\nI0309 00:43:22.957322 3813 log.go:172] (0xc0000e0bb0) Data frame received for 5\nI0309 00:43:22.957336 3813 log.go:172] (0xc000551360) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0309 00:43:22.958645 3813 log.go:172] (0xc0000e0bb0) Data frame received for 1\nI0309 00:43:22.958661 3813 log.go:172] (0xc000703e00) (1) Data frame handling\nI0309 00:43:22.958677 3813 log.go:172] (0xc000703e00) (1) Data frame sent\nI0309 00:43:22.958687 3813 log.go:172] (0xc0000e0bb0) (0xc000703e00) Stream removed, broadcasting: 1\nI0309 00:43:22.958697 3813 log.go:172] (0xc0000e0bb0) Go away received\nI0309 00:43:22.958974 3813 log.go:172] (0xc0000e0bb0) (0xc000703e00) Stream removed, broadcasting: 1\nI0309 00:43:22.958987 3813 log.go:172] (0xc0000e0bb0) (0xc0006846e0) Stream removed, broadcasting: 3\nI0309 00:43:22.958997 3813 log.go:172] (0xc0000e0bb0) (0xc000551360) Stream removed, broadcasting: 5\n" Mar 9 00:43:22.962: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 9 00:43:22.962: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 9 00:43:22.965: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 9 00:43:22.965: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 9 00:43:22.965: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 9 00:43:22.968: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4936 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 9 00:43:23.129: INFO: stderr: "I0309 00:43:23.072216 3833 log.go:172] (0xc000a346e0) (0xc000697cc0) Create stream\nI0309 00:43:23.072257 3833 log.go:172] (0xc000a346e0) (0xc000697cc0) Stream added, broadcasting: 1\nI0309 00:43:23.074583 3833 log.go:172] (0xc000a346e0) Reply frame received for 1\nI0309 00:43:23.074627 3833 log.go:172] (0xc000a346e0) (0xc000a46000) Create stream\nI0309 00:43:23.074640 3833 log.go:172] (0xc000a346e0) (0xc000a46000) Stream added, broadcasting: 3\nI0309 00:43:23.075382 3833 log.go:172] (0xc000a346e0) Reply frame received for 3\nI0309 00:43:23.075408 3833 log.go:172] (0xc000a346e0) (0xc000a460a0) Create stream\nI0309 00:43:23.075416 3833 log.go:172] (0xc000a346e0) (0xc000a460a0) Stream added, broadcasting: 5\nI0309 00:43:23.076196 3833 log.go:172] (0xc000a346e0) Reply frame received for 5\nI0309 
00:43:23.124799 3833 log.go:172] (0xc000a346e0) Data frame received for 5\nI0309 00:43:23.124839 3833 log.go:172] (0xc000a460a0) (5) Data frame handling\nI0309 00:43:23.124854 3833 log.go:172] (0xc000a460a0) (5) Data frame sent\nI0309 00:43:23.124866 3833 log.go:172] (0xc000a346e0) Data frame received for 5\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0309 00:43:23.124879 3833 log.go:172] (0xc000a460a0) (5) Data frame handling\nI0309 00:43:23.124901 3833 log.go:172] (0xc000a346e0) Data frame received for 3\nI0309 00:43:23.124915 3833 log.go:172] (0xc000a46000) (3) Data frame handling\nI0309 00:43:23.124936 3833 log.go:172] (0xc000a46000) (3) Data frame sent\nI0309 00:43:23.124959 3833 log.go:172] (0xc000a346e0) Data frame received for 3\nI0309 00:43:23.124981 3833 log.go:172] (0xc000a46000) (3) Data frame handling\nI0309 00:43:23.125842 3833 log.go:172] (0xc000a346e0) Data frame received for 1\nI0309 00:43:23.125858 3833 log.go:172] (0xc000697cc0) (1) Data frame handling\nI0309 00:43:23.125876 3833 log.go:172] (0xc000697cc0) (1) Data frame sent\nI0309 00:43:23.125898 3833 log.go:172] (0xc000a346e0) (0xc000697cc0) Stream removed, broadcasting: 1\nI0309 00:43:23.125912 3833 log.go:172] (0xc000a346e0) Go away received\nI0309 00:43:23.126322 3833 log.go:172] (0xc000a346e0) (0xc000697cc0) Stream removed, broadcasting: 1\nI0309 00:43:23.126341 3833 log.go:172] (0xc000a346e0) (0xc000a46000) Stream removed, broadcasting: 3\nI0309 00:43:23.126349 3833 log.go:172] (0xc000a346e0) (0xc000a460a0) Stream removed, broadcasting: 5\n" Mar 9 00:43:23.129: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 9 00:43:23.129: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 9 00:43:23.129: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4936 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 9 00:43:23.353: INFO: stderr: "I0309 00:43:23.256600 3855 log.go:172] (0xc0008dabb0) (0xc000746aa0) Create stream\nI0309 00:43:23.256634 3855 log.go:172] (0xc0008dabb0) (0xc000746aa0) Stream added, broadcasting: 1\nI0309 00:43:23.259102 3855 log.go:172] (0xc0008dabb0) Reply frame received for 1\nI0309 00:43:23.259130 3855 log.go:172] (0xc0008dabb0) (0xc0007d5d60) Create stream\nI0309 00:43:23.259139 3855 log.go:172] (0xc0008dabb0) (0xc0007d5d60) Stream added, broadcasting: 3\nI0309 00:43:23.260195 3855 log.go:172] (0xc0008dabb0) Reply frame received for 3\nI0309 00:43:23.260222 3855 log.go:172] (0xc0008dabb0) (0xc000707c20) Create stream\nI0309 00:43:23.260234 3855 log.go:172] (0xc0008dabb0) (0xc000707c20) Stream added, broadcasting: 5\nI0309 00:43:23.262009 3855 log.go:172] (0xc0008dabb0) Reply frame received for 5\nI0309 00:43:23.332241 3855 log.go:172] (0xc0008dabb0) Data frame received for 5\nI0309 00:43:23.332261 3855 log.go:172] (0xc000707c20) (5) Data frame handling\nI0309 00:43:23.332289 3855 log.go:172] (0xc000707c20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0309 00:43:23.349315 3855 log.go:172] (0xc0008dabb0) Data frame received for 3\nI0309 00:43:23.349333 3855 log.go:172] (0xc0007d5d60) (3) Data frame handling\nI0309 00:43:23.349360 3855 log.go:172] (0xc0007d5d60) (3) Data frame sent\nI0309 00:43:23.349440 3855 log.go:172] (0xc0008dabb0) Data frame received for 5\nI0309 00:43:23.349472 3855 log.go:172] (0xc0008dabb0) Data frame 
received for 3\nI0309 00:43:23.349490 3855 log.go:172] (0xc0007d5d60) (3) Data frame handling\nI0309 00:43:23.349522 3855 log.go:172] (0xc000707c20) (5) Data frame handling\nI0309 00:43:23.350757 3855 log.go:172] (0xc0008dabb0) Data frame received for 1\nI0309 00:43:23.350790 3855 log.go:172] (0xc000746aa0) (1) Data frame handling\nI0309 00:43:23.350807 3855 log.go:172] (0xc000746aa0) (1) Data frame sent\nI0309 00:43:23.350828 3855 log.go:172] (0xc0008dabb0) (0xc000746aa0) Stream removed, broadcasting: 1\nI0309 00:43:23.350846 3855 log.go:172] (0xc0008dabb0) Go away received\nI0309 00:43:23.351128 3855 log.go:172] (0xc0008dabb0) (0xc000746aa0) Stream removed, broadcasting: 1\nI0309 00:43:23.351144 3855 log.go:172] (0xc0008dabb0) (0xc0007d5d60) Stream removed, broadcasting: 3\nI0309 00:43:23.351151 3855 log.go:172] (0xc0008dabb0) (0xc000707c20) Stream removed, broadcasting: 5\n" Mar 9 00:43:23.353: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 9 00:43:23.353: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 9 00:43:23.354: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4936 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 9 00:43:23.547: INFO: stderr: "I0309 00:43:23.459999 3876 log.go:172] (0xc000756b00) (0xc000738000) Create stream\nI0309 00:43:23.460052 3876 log.go:172] (0xc000756b00) (0xc000738000) Stream added, broadcasting: 1\nI0309 00:43:23.462929 3876 log.go:172] (0xc000756b00) Reply frame received for 1\nI0309 00:43:23.462964 3876 log.go:172] (0xc000756b00) (0xc000701ae0) Create stream\nI0309 00:43:23.462974 3876 log.go:172] (0xc000756b00) (0xc000701ae0) Stream added, broadcasting: 3\nI0309 00:43:23.463767 3876 log.go:172] (0xc000756b00) Reply frame received for 3\nI0309 00:43:23.463805 3876 log.go:172] (0xc000756b00) (0xc000202000) Create stream\nI0309 00:43:23.463821 3876 log.go:172] (0xc000756b00) (0xc000202000) Stream added, broadcasting: 5\nI0309 00:43:23.464826 3876 log.go:172] (0xc000756b00) Reply frame received for 5\nI0309 00:43:23.521147 3876 log.go:172] (0xc000756b00) Data frame received for 5\nI0309 00:43:23.521171 3876 log.go:172] (0xc000202000) (5) Data frame handling\nI0309 00:43:23.521190 3876 log.go:172] (0xc000202000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0309 00:43:23.542205 3876 log.go:172] (0xc000756b00) Data frame received for 3\nI0309 00:43:23.542230 3876 log.go:172] (0xc000701ae0) (3) Data frame handling\nI0309 00:43:23.542247 3876 log.go:172] (0xc000701ae0) (3) Data frame sent\nI0309 00:43:23.542690 3876 log.go:172] (0xc000756b00) Data frame received for 5\nI0309 00:43:23.542723 3876 log.go:172] (0xc000202000) (5) Data frame handling\nI0309 00:43:23.542893 3876 log.go:172] (0xc000756b00) Data frame received for 3\nI0309 00:43:23.542911 3876 log.go:172] (0xc000701ae0) (3) Data frame handling\nI0309 00:43:23.544037 3876 log.go:172] (0xc000756b00) Data frame received for 1\nI0309 00:43:23.544060 3876 log.go:172] (0xc000738000) (1) Data frame handling\nI0309 00:43:23.544071 3876 log.go:172] (0xc000738000) (1) Data frame sent\nI0309 00:43:23.544082 3876 log.go:172] (0xc000756b00) (0xc000738000) Stream removed, broadcasting: 1\nI0309 00:43:23.544094 3876 log.go:172] (0xc000756b00) Go away received\nI0309 00:43:23.544421 3876 log.go:172] (0xc000756b00) (0xc000738000) Stream 
removed, broadcasting: 1\nI0309 00:43:23.544441 3876 log.go:172] (0xc000756b00) (0xc000701ae0) Stream removed, broadcasting: 3\nI0309 00:43:23.544451 3876 log.go:172] (0xc000756b00) (0xc000202000) Stream removed, broadcasting: 5\n" Mar 9 00:43:23.547: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 9 00:43:23.547: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 9 00:43:23.547: INFO: Waiting for statefulset status.replicas updated to 0 Mar 9 00:43:23.560: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 9 00:43:33.566: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 9 00:43:33.566: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 9 00:43:33.566: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 9 00:43:33.604: INFO: POD NODE PHASE GRACE CONDITIONS Mar 9 00:43:33.604: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:42:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:42:50 +0000 UTC }] Mar 9 00:43:33.604: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC }] Mar 9 00:43:33.604: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC }] Mar 9 00:43:33.604: INFO: Mar 9 00:43:33.604: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 9 00:43:34.608: INFO: POD NODE PHASE GRACE CONDITIONS Mar 9 00:43:34.608: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:42:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:42:50 +0000 UTC }] Mar 9 00:43:34.608: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC }] Mar 9 00:43:34.608: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC }] Mar 9 00:43:34.608: INFO: Mar 9 00:43:34.608: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 9 00:43:35.613: INFO: POD NODE PHASE GRACE CONDITIONS Mar 9 00:43:35.613: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:42:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:42:50 +0000 UTC }] Mar 9 00:43:35.613: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC }] Mar 9 00:43:35.613: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC }] Mar 9 00:43:35.613: INFO: Mar 9 00:43:35.613: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 9 00:43:36.617: INFO: POD NODE PHASE GRACE CONDITIONS Mar 9 00:43:36.617: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:42:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:42:50 +0000 UTC }] Mar 9 00:43:36.617: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC }] Mar 9 00:43:36.617: INFO: ss-2 
latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC }] Mar 9 00:43:36.617: INFO: Mar 9 00:43:36.617: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 9 00:43:37.623: INFO: POD NODE PHASE GRACE CONDITIONS Mar 9 00:43:37.623: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:42:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:42:50 +0000 UTC }] Mar 9 00:43:37.623: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC }] Mar 9 00:43:37.623: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC }] Mar 9 00:43:37.623: INFO: Mar 9 00:43:37.623: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 9 00:43:38.627: INFO: POD NODE PHASE GRACE CONDITIONS Mar 9 00:43:38.628: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:42:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:42:50 +0000 UTC }] Mar 9 00:43:38.628: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC }] Mar 9 00:43:38.628: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with 
unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC }] Mar 9 00:43:38.628: INFO: Mar 9 00:43:38.628: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 9 00:43:39.632: INFO: POD NODE PHASE GRACE CONDITIONS Mar 9 00:43:39.632: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:42:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:42:50 +0000 UTC }] Mar 9 00:43:39.632: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC }] Mar 9 00:43:39.632: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC }] Mar 9 00:43:39.632: INFO: Mar 9 00:43:39.632: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 9 00:43:40.637: INFO: POD NODE PHASE GRACE CONDITIONS Mar 9 00:43:40.637: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:42:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:42:50 +0000 UTC }] Mar 9 00:43:40.637: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC }] Mar 9 00:43:40.637: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 
+0000 UTC 2020-03-09 00:43:10 +0000 UTC }] Mar 9 00:43:40.637: INFO: Mar 9 00:43:40.637: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 9 00:43:41.642: INFO: POD NODE PHASE GRACE CONDITIONS Mar 9 00:43:41.642: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:42:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:42:50 +0000 UTC }] Mar 9 00:43:41.642: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC }] Mar 9 00:43:41.642: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 00:43:10 +0000 UTC }] Mar 9 00:43:41.642: INFO: Mar 9 00:43:41.642: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 9 00:43:42.645: INFO: Verifying statefulset ss doesn't scale past 0 for another 930.535892ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-4936 Mar 9 00:43:43.648: INFO: Scaling statefulset ss to 0 Mar 9 00:43:43.657: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 9 00:43:43.660: INFO: Deleting all statefulset in ns statefulset-4936 Mar 9 00:43:43.662: INFO: Scaling statefulset ss to 0 Mar 9 00:43:43.670: INFO: Waiting for statefulset status.replicas updated to 0 Mar 9 00:43:43.673: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:43:43.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4936" for this suite. 
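[Note] The burst-scaling spec above exercises a StatefulSet created with podManagementPolicy: Parallel (the "burst" in the spec name): it knocks each webserver pod's readiness probe out by moving the httpd index page aside, verifies that scaling up and down proceeds anyway, then moves the page back. A minimal by-hand sketch of the same probe-breaking trick, reusing the names from this run (statefulset ss, namespace statefulset-4936); kubectl scale stands in here for the framework's direct replica update:

# Break readiness on ss-0: the HTTP readiness probe starts failing once index.html is gone
kubectl exec --namespace=statefulset-4936 ss-0 -- /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'

# Scale while ss-0 is unready; with Parallel pod management the new replicas start anyway
kubectl scale statefulset ss --namespace=statefulset-4936 --replicas=3

# Restore readiness by putting the page back
kubectl exec --namespace=statefulset-4936 ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'

One easily missed detail in the captured stderr: on the freshly created ss-1 and ss-2 the restore command prints "mv: can't rename '/tmp/index.html': No such file or directory", because the page was never moved on those pods; the trailing '|| true' deliberately swallows that error so the exec still exits 0 and the test keeps going.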
• [SLOW TEST:53.458 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":280,"completed":276,"skipped":4519,"failed":0} SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:43:43.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test override command Mar 9 00:43:43.815: INFO: Waiting up to 5m0s for pod "client-containers-750216a0-6b64-4459-b031-b04104637289" in namespace "containers-6575" to be "success or failure" Mar 9 00:43:43.822: INFO: Pod "client-containers-750216a0-6b64-4459-b031-b04104637289": Phase="Pending", Reason="", readiness=false. Elapsed: 7.640443ms Mar 9 00:43:45.826: INFO: Pod "client-containers-750216a0-6b64-4459-b031-b04104637289": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010989854s STEP: Saw pod success Mar 9 00:43:45.826: INFO: Pod "client-containers-750216a0-6b64-4459-b031-b04104637289" satisfied condition "success or failure" Mar 9 00:43:45.828: INFO: Trying to get logs from node latest-worker pod client-containers-750216a0-6b64-4459-b031-b04104637289 container test-container: STEP: delete the pod Mar 9 00:43:45.872: INFO: Waiting for pod client-containers-750216a0-6b64-4459-b031-b04104637289 to disappear Mar 9 00:43:45.876: INFO: Pod client-containers-750216a0-6b64-4459-b031-b04104637289 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:43:45.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6575" for this suite. 
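[Note] The Docker Containers spec above passes because setting spec.containers[].command on a pod replaces the image's ENTRYPOINT (while args would replace its CMD). A minimal sketch of the same override, with a hypothetical pod name and the busybox:1.29 image this run already pulls:

# command replaces the image ENTRYPOINT; the container runs echo instead of the image default
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: override-command-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/echo", "entrypoint overridden"]
EOF

# Once the pod has completed (phase Succeeded, as in the test), its log should read "entrypoint overridden"
kubectl logs override-command-demo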
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":280,"completed":277,"skipped":4522,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:43:45.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating replication controller my-hostname-basic-97cc782d-2365-492b-9264-fdcc1dd08b65 Mar 9 00:43:46.017: INFO: Pod name my-hostname-basic-97cc782d-2365-492b-9264-fdcc1dd08b65: Found 0 pods out of 1 Mar 9 00:43:51.045: INFO: Pod name my-hostname-basic-97cc782d-2365-492b-9264-fdcc1dd08b65: Found 1 pods out of 1 Mar 9 00:43:51.046: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-97cc782d-2365-492b-9264-fdcc1dd08b65" are running Mar 9 00:43:51.048: INFO: Pod "my-hostname-basic-97cc782d-2365-492b-9264-fdcc1dd08b65-t4tzk" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-09 00:43:46 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-09 00:43:47 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-09 00:43:47 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-09 00:43:46 +0000 UTC Reason: Message:}]) Mar 9 00:43:51.048: INFO: Trying to dial the pod Mar 9 00:43:56.060: INFO: Controller my-hostname-basic-97cc782d-2365-492b-9264-fdcc1dd08b65: Got expected result from replica 1 [my-hostname-basic-97cc782d-2365-492b-9264-fdcc1dd08b65-t4tzk]: "my-hostname-basic-97cc782d-2365-492b-9264-fdcc1dd08b65-t4tzk", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:43:56.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8541" for this suite. 
• [SLOW TEST:10.187 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":280,"completed":278,"skipped":4547,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:43:56.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-map-b086416c-563f-433a-8a91-572877138d66 STEP: Creating a pod to test consume configMaps Mar 9 00:43:56.163: INFO: Waiting up to 5m0s for pod "pod-configmaps-7d945418-4cf6-4866-8492-0ad326beb7e2" in namespace "configmap-1696" to be "success or failure" Mar 9 00:43:56.181: INFO: Pod "pod-configmaps-7d945418-4cf6-4866-8492-0ad326beb7e2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.575575ms Mar 9 00:43:58.184: INFO: Pod "pod-configmaps-7d945418-4cf6-4866-8492-0ad326beb7e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021239608s STEP: Saw pod success Mar 9 00:43:58.184: INFO: Pod "pod-configmaps-7d945418-4cf6-4866-8492-0ad326beb7e2" satisfied condition "success or failure" Mar 9 00:43:58.187: INFO: Trying to get logs from node latest-worker pod pod-configmaps-7d945418-4cf6-4866-8492-0ad326beb7e2 container configmap-volume-test: STEP: delete the pod Mar 9 00:43:58.211: INFO: Waiting for pod pod-configmaps-7d945418-4cf6-4866-8492-0ad326beb7e2 to disappear Mar 9 00:43:58.215: INFO: Pod pod-configmaps-7d945418-4cf6-4866-8492-0ad326beb7e2 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:43:58.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1696" for this suite. 
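[Note] "Mappings and Item mode" in the ConfigMap spec above refers to the items list of a configMap volume: each item maps a ConfigMap key to a relative path inside the volume and may set a per-file mode (the [LinuxOnly] tag exists because file modes are not enforced on Windows nodes). A minimal sketch with hypothetical names:

kubectl create configmap demo-config --from-literal=data-1=value-1   # hypothetical name and key

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mode-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "ls -lR /etc/cm && cat /etc/cm/path/to/data-1"]
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/cm
  volumes:
  - name: cm-volume
    configMap:
      name: demo-config
      items:
      - key: data-1
        path: path/to/data-1   # the mapping: this key is exposed at this relative path
        mode: 0400             # the per-item file mode, overriding any defaultMode
EOF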
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":279,"skipped":4550,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 9 00:43:58.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 9 00:43:58.271: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9030' Mar 9 00:43:58.356: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 9 00:43:58.356: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 Mar 9 00:43:58.359: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-9030' Mar 9 00:43:58.488: INFO: stderr: "" Mar 9 00:43:58.488: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 9 00:43:58.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9030" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":280,"completed":280,"skipped":4562,"failed":0} SSSMar 9 00:43:58.494: INFO: Running AfterSuite actions on all nodes Mar 9 00:43:58.494: INFO: Running AfterSuite actions on node 1 Mar 9 00:43:58.494: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":280,"completed":280,"skipped":4565,"failed":0} Ran 280 of 4845 Specs in 4022.396 seconds SUCCESS! -- 280 Passed | 0 Failed | 0 Pending | 4565 Skipped PASS