I0504 23:37:34.864453 7 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0504 23:37:34.864675 7 e2e.go:129] Starting e2e run "489944c9-0611-4199-9228-6b72f20447c1" on Ginkgo node 1
{"msg":"Test Suite starting","total":288,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1588635453 - Will randomize all specs
Will run 288 of 5094 specs
May 4 23:37:34.916: INFO: >>> kubeConfig: /root/.kube/config
May 4 23:37:34.918: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 4 23:37:34.940: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 4 23:37:34.970: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 4 23:37:34.970: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 4 23:37:34.970: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 4 23:37:34.980: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 4 23:37:34.980: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 4 23:37:34.980: INFO: e2e test version: v1.19.0-alpha.2.298+0bcbe384d866b9
May 4 23:37:34.981: INFO: kube-apiserver version: v1.18.2
May 4 23:37:34.981: INFO: >>> kubeConfig: /root/.kube/config
May 4 23:37:34.985: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 23:37:34.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
May 4 23:37:35.033: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-7d21b26b-2ab7-4fb1-81d9-25d8cb27b5e9
STEP: Creating a pod to test consume configMaps
May 4 23:37:35.072: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4dc03115-9c4b-4cc6-b907-0493166817e7" in namespace "projected-2425" to be "Succeeded or Failed"
May 4 23:37:35.098: INFO: Pod "pod-projected-configmaps-4dc03115-9c4b-4cc6-b907-0493166817e7": Phase="Pending", Reason="", readiness=false. Elapsed: 25.248151ms
May 4 23:37:37.126: INFO: Pod "pod-projected-configmaps-4dc03115-9c4b-4cc6-b907-0493166817e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053600907s
May 4 23:37:39.130: INFO: Pod "pod-projected-configmaps-4dc03115-9c4b-4cc6-b907-0493166817e7": Phase="Running", Reason="", readiness=true. Elapsed: 4.057707909s
May 4 23:37:41.153: INFO: Pod "pod-projected-configmaps-4dc03115-9c4b-4cc6-b907-0493166817e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.080557599s
STEP: Saw pod success
May 4 23:37:41.153: INFO: Pod "pod-projected-configmaps-4dc03115-9c4b-4cc6-b907-0493166817e7" satisfied condition "Succeeded or Failed"
May 4 23:37:41.155: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-4dc03115-9c4b-4cc6-b907-0493166817e7 container projected-configmap-volume-test:
STEP: delete the pod
May 4 23:37:41.207: INFO: Waiting for pod pod-projected-configmaps-4dc03115-9c4b-4cc6-b907-0493166817e7 to disappear
May 4 23:37:41.256: INFO: Pod pod-projected-configmaps-4dc03115-9c4b-4cc6-b907-0493166817e7 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 23:37:41.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2425" for this suite.
• [SLOW TEST:6.279 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":1,"skipped":7,"failed":0}
SS
------------------------------
[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 23:37:41.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api
object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-1694 May 4 23:37:45.861: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1694 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 4 23:37:49.036: INFO: stderr: "I0504 23:37:48.916593 30 log.go:172] (0xc0000e04d0) (0xc000715720) Create stream\nI0504 23:37:48.916677 30 log.go:172] (0xc0000e04d0) (0xc000715720) Stream added, broadcasting: 1\nI0504 23:37:48.919097 30 log.go:172] (0xc0000e04d0) Reply frame received for 1\nI0504 23:37:48.919136 30 log.go:172] (0xc0000e04d0) (0xc000704f00) Create stream\nI0504 23:37:48.919156 30 log.go:172] (0xc0000e04d0) (0xc000704f00) Stream added, broadcasting: 3\nI0504 23:37:48.920274 30 log.go:172] (0xc0000e04d0) Reply frame received for 3\nI0504 23:37:48.920320 30 log.go:172] (0xc0000e04d0) (0xc000705ea0) Create stream\nI0504 23:37:48.920335 30 log.go:172] (0xc0000e04d0) (0xc000705ea0) Stream added, broadcasting: 5\nI0504 23:37:48.921659 30 log.go:172] (0xc0000e04d0) Reply frame received for 5\nI0504 23:37:49.022026 30 log.go:172] (0xc0000e04d0) Data frame received for 5\nI0504 23:37:49.022059 30 log.go:172] (0xc000705ea0) (5) Data frame handling\nI0504 23:37:49.022082 30 log.go:172] (0xc000705ea0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0504 23:37:49.027093 30 log.go:172] (0xc0000e04d0) Data frame received for 3\nI0504 23:37:49.027120 30 log.go:172] (0xc000704f00) (3) Data 
frame handling\nI0504 23:37:49.027147 30 log.go:172] (0xc000704f00) (3) Data frame sent\nI0504 23:37:49.027777 30 log.go:172] (0xc0000e04d0) Data frame received for 5\nI0504 23:37:49.027809 30 log.go:172] (0xc0000e04d0) Data frame received for 3\nI0504 23:37:49.027844 30 log.go:172] (0xc000704f00) (3) Data frame handling\nI0504 23:37:49.027878 30 log.go:172] (0xc000705ea0) (5) Data frame handling\nI0504 23:37:49.030589 30 log.go:172] (0xc0000e04d0) Data frame received for 1\nI0504 23:37:49.030618 30 log.go:172] (0xc000715720) (1) Data frame handling\nI0504 23:37:49.030654 30 log.go:172] (0xc000715720) (1) Data frame sent\nI0504 23:37:49.030681 30 log.go:172] (0xc0000e04d0) (0xc000715720) Stream removed, broadcasting: 1\nI0504 23:37:49.030702 30 log.go:172] (0xc0000e04d0) Go away received\nI0504 23:37:49.031089 30 log.go:172] (0xc0000e04d0) (0xc000715720) Stream removed, broadcasting: 1\nI0504 23:37:49.031113 30 log.go:172] (0xc0000e04d0) (0xc000704f00) Stream removed, broadcasting: 3\nI0504 23:37:49.031125 30 log.go:172] (0xc0000e04d0) (0xc000705ea0) Stream removed, broadcasting: 5\n"
May 4 23:37:49.037: INFO: stdout: "iptables"
May 4 23:37:49.037: INFO: proxyMode: iptables
May 4 23:37:49.054: INFO: Waiting for pod kube-proxy-mode-detector to disappear
May 4 23:37:49.069: INFO: Pod kube-proxy-mode-detector still exists
May 4 23:37:51.070: INFO: Waiting for pod kube-proxy-mode-detector to disappear
May 4 23:37:51.074: INFO: Pod kube-proxy-mode-detector still exists
May 4 23:37:53.070: INFO: Waiting for pod kube-proxy-mode-detector to disappear
May 4 23:37:53.072: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating service affinity-clusterip-timeout in namespace services-1694
STEP: creating replication controller affinity-clusterip-timeout in namespace services-1694
I0504 23:37:53.162257 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-1694, replica count: 3
I0504 23:37:56.212664 7
runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0504 23:37:59.212925 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 4 23:37:59.220: INFO: Creating new exec pod May 4 23:38:04.261: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1694 execpod-affinityvzgnn -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' May 4 23:38:04.521: INFO: stderr: "I0504 23:38:04.407317 58 log.go:172] (0xc00003a420) (0xc00041e280) Create stream\nI0504 23:38:04.407379 58 log.go:172] (0xc00003a420) (0xc00041e280) Stream added, broadcasting: 1\nI0504 23:38:04.411094 58 log.go:172] (0xc00003a420) Reply frame received for 1\nI0504 23:38:04.411148 58 log.go:172] (0xc00003a420) (0xc0002fce60) Create stream\nI0504 23:38:04.411162 58 log.go:172] (0xc00003a420) (0xc0002fce60) Stream added, broadcasting: 3\nI0504 23:38:04.412519 58 log.go:172] (0xc00003a420) Reply frame received for 3\nI0504 23:38:04.412578 58 log.go:172] (0xc00003a420) (0xc000139d60) Create stream\nI0504 23:38:04.412602 58 log.go:172] (0xc00003a420) (0xc000139d60) Stream added, broadcasting: 5\nI0504 23:38:04.414169 58 log.go:172] (0xc00003a420) Reply frame received for 5\nI0504 23:38:04.512737 58 log.go:172] (0xc00003a420) Data frame received for 5\nI0504 23:38:04.512778 58 log.go:172] (0xc000139d60) (5) Data frame handling\nI0504 23:38:04.512801 58 log.go:172] (0xc000139d60) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI0504 23:38:04.512947 58 log.go:172] (0xc00003a420) Data frame received for 5\nI0504 23:38:04.512970 58 log.go:172] (0xc000139d60) (5) Data frame handling\nI0504 23:38:04.512989 58 log.go:172] (0xc000139d60) (5) Data frame sent\nConnection to affinity-clusterip-timeout 
80 port [tcp/http] succeeded!\nI0504 23:38:04.513568 58 log.go:172] (0xc00003a420) Data frame received for 5\nI0504 23:38:04.513596 58 log.go:172] (0xc000139d60) (5) Data frame handling\nI0504 23:38:04.513616 58 log.go:172] (0xc00003a420) Data frame received for 3\nI0504 23:38:04.513629 58 log.go:172] (0xc0002fce60) (3) Data frame handling\nI0504 23:38:04.515702 58 log.go:172] (0xc00003a420) Data frame received for 1\nI0504 23:38:04.515738 58 log.go:172] (0xc00041e280) (1) Data frame handling\nI0504 23:38:04.515754 58 log.go:172] (0xc00041e280) (1) Data frame sent\nI0504 23:38:04.515785 58 log.go:172] (0xc00003a420) (0xc00041e280) Stream removed, broadcasting: 1\nI0504 23:38:04.515807 58 log.go:172] (0xc00003a420) Go away received\nI0504 23:38:04.516183 58 log.go:172] (0xc00003a420) (0xc00041e280) Stream removed, broadcasting: 1\nI0504 23:38:04.516206 58 log.go:172] (0xc00003a420) (0xc0002fce60) Stream removed, broadcasting: 3\nI0504 23:38:04.516215 58 log.go:172] (0xc00003a420) (0xc000139d60) Stream removed, broadcasting: 5\n" May 4 23:38:04.521: INFO: stdout: "" May 4 23:38:04.522: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1694 execpod-affinityvzgnn -- /bin/sh -x -c nc -zv -t -w 2 10.100.83.3 80' May 4 23:38:04.772: INFO: stderr: "I0504 23:38:04.690738 78 log.go:172] (0xc000966160) (0xc0005a8960) Create stream\nI0504 23:38:04.690802 78 log.go:172] (0xc000966160) (0xc0005a8960) Stream added, broadcasting: 1\nI0504 23:38:04.694102 78 log.go:172] (0xc000966160) Reply frame received for 1\nI0504 23:38:04.694175 78 log.go:172] (0xc000966160) (0xc00051c140) Create stream\nI0504 23:38:04.694211 78 log.go:172] (0xc000966160) (0xc00051c140) Stream added, broadcasting: 3\nI0504 23:38:04.695110 78 log.go:172] (0xc000966160) Reply frame received for 3\nI0504 23:38:04.695175 78 log.go:172] (0xc000966160) (0xc00051d0e0) Create stream\nI0504 23:38:04.695207 78 log.go:172] (0xc000966160) 
(0xc00051d0e0) Stream added, broadcasting: 5\nI0504 23:38:04.696202 78 log.go:172] (0xc000966160) Reply frame received for 5\nI0504 23:38:04.763841 78 log.go:172] (0xc000966160) Data frame received for 3\nI0504 23:38:04.763889 78 log.go:172] (0xc00051c140) (3) Data frame handling\nI0504 23:38:04.764038 78 log.go:172] (0xc000966160) Data frame received for 5\nI0504 23:38:04.764074 78 log.go:172] (0xc00051d0e0) (5) Data frame handling\nI0504 23:38:04.764104 78 log.go:172] (0xc00051d0e0) (5) Data frame sent\nI0504 23:38:04.764122 78 log.go:172] (0xc000966160) Data frame received for 5\nI0504 23:38:04.764137 78 log.go:172] (0xc00051d0e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.83.3 80\nConnection to 10.100.83.3 80 port [tcp/http] succeeded!\nI0504 23:38:04.766193 78 log.go:172] (0xc000966160) Data frame received for 1\nI0504 23:38:04.766230 78 log.go:172] (0xc0005a8960) (1) Data frame handling\nI0504 23:38:04.766258 78 log.go:172] (0xc0005a8960) (1) Data frame sent\nI0504 23:38:04.766294 78 log.go:172] (0xc000966160) (0xc0005a8960) Stream removed, broadcasting: 1\nI0504 23:38:04.766724 78 log.go:172] (0xc000966160) (0xc0005a8960) Stream removed, broadcasting: 1\nI0504 23:38:04.766756 78 log.go:172] (0xc000966160) (0xc00051c140) Stream removed, broadcasting: 3\nI0504 23:38:04.766769 78 log.go:172] (0xc000966160) (0xc00051d0e0) Stream removed, broadcasting: 5\n" May 4 23:38:04.772: INFO: stdout: "" May 4 23:38:04.772: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1694 execpod-affinityvzgnn -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.100.83.3:80/ ; done' May 4 23:38:05.051: INFO: stderr: "I0504 23:38:04.898639 100 log.go:172] (0xc000792370) (0xc0003099a0) Create stream\nI0504 23:38:04.898674 100 log.go:172] (0xc000792370) (0xc0003099a0) Stream added, broadcasting: 1\nI0504 23:38:04.900579 100 log.go:172] (0xc000792370) Reply frame 
received for 1\nI0504 23:38:04.900615 100 log.go:172] (0xc000792370) (0xc000309f40) Create stream\nI0504 23:38:04.900625 100 log.go:172] (0xc000792370) (0xc000309f40) Stream added, broadcasting: 3\nI0504 23:38:04.901662 100 log.go:172] (0xc000792370) Reply frame received for 3\nI0504 23:38:04.901688 100 log.go:172] (0xc000792370) (0xc000694960) Create stream\nI0504 23:38:04.901721 100 log.go:172] (0xc000792370) (0xc000694960) Stream added, broadcasting: 5\nI0504 23:38:04.902787 100 log.go:172] (0xc000792370) Reply frame received for 5\nI0504 23:38:04.964568 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:04.964586 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:04.964614 100 log.go:172] (0xc000792370) Data frame received for 5\nI0504 23:38:04.964648 100 log.go:172] (0xc000694960) (5) Data frame handling\nI0504 23:38:04.964660 100 log.go:172] (0xc000694960) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.83.3:80/\nI0504 23:38:04.964676 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:04.968319 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:04.968351 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:04.968382 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:04.968608 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:04.968636 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:04.968655 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:04.968674 100 log.go:172] (0xc000792370) Data frame received for 5\nI0504 23:38:04.968685 100 log.go:172] (0xc000694960) (5) Data frame handling\nI0504 23:38:04.968696 100 log.go:172] (0xc000694960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.83.3:80/\nI0504 23:38:04.972077 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:04.972090 100 log.go:172] (0xc000309f40) (3) Data frame 
handling\nI0504 23:38:04.972096 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:04.972378 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:04.972400 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:04.972420 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:04.972450 100 log.go:172] (0xc000792370) Data frame received for 5\nI0504 23:38:04.972472 100 log.go:172] (0xc000694960) (5) Data frame handling\nI0504 23:38:04.972501 100 log.go:172] (0xc000694960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.83.3:80/\nI0504 23:38:04.975847 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:04.975867 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:04.975884 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:04.976159 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:04.976182 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:04.976192 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:04.976206 100 log.go:172] (0xc000792370) Data frame received for 5\nI0504 23:38:04.976213 100 log.go:172] (0xc000694960) (5) Data frame handling\nI0504 23:38:04.976221 100 log.go:172] (0xc000694960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.83.3:80/\nI0504 23:38:04.980728 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:04.980749 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:04.980760 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:04.981415 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:04.981460 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:04.981479 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:04.981504 100 log.go:172] (0xc000792370) Data frame received for 5\nI0504 23:38:04.981529 100 log.go:172] (0xc000694960) (5) Data frame 
handling\nI0504 23:38:04.981548 100 log.go:172] (0xc000694960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.83.3:80/\nI0504 23:38:04.984998 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:04.985023 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:04.985057 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:04.985762 100 log.go:172] (0xc000792370) Data frame received for 5\nI0504 23:38:04.985799 100 log.go:172] (0xc000694960) (5) Data frame handling\nI0504 23:38:04.985820 100 log.go:172] (0xc000694960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.83.3:80/\nI0504 23:38:04.985855 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:04.985866 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:04.985883 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:04.990593 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:04.990606 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:04.990613 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:04.990874 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:04.990916 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:04.990930 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:04.990947 100 log.go:172] (0xc000792370) Data frame received for 5\nI0504 23:38:04.990958 100 log.go:172] (0xc000694960) (5) Data frame handling\nI0504 23:38:04.990967 100 log.go:172] (0xc000694960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.83.3:80/\nI0504 23:38:04.994668 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:04.994690 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:04.994709 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:04.995572 100 log.go:172] (0xc000792370) Data frame received for 5\nI0504 
23:38:04.995613 100 log.go:172] (0xc000694960) (5) Data frame handling\nI0504 23:38:04.995628 100 log.go:172] (0xc000694960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.83.3:80/\nI0504 23:38:04.995646 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:04.995661 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:04.995683 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:05.002435 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:05.002458 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:05.002492 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:05.003322 100 log.go:172] (0xc000792370) Data frame received for 5\nI0504 23:38:05.003343 100 log.go:172] (0xc000694960) (5) Data frame handling\nI0504 23:38:05.003358 100 log.go:172] (0xc000694960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.83.3:80/\nI0504 23:38:05.003399 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:05.003427 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:05.003449 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:05.007828 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:05.007859 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:05.007892 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:05.008221 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:05.008253 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:05.008268 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:05.008293 100 log.go:172] (0xc000792370) Data frame received for 5\nI0504 23:38:05.008305 100 log.go:172] (0xc000694960) (5) Data frame handling\nI0504 23:38:05.008318 100 log.go:172] (0xc000694960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.83.3:80/\nI0504 23:38:05.012607 100 
log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:05.012629 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:05.012650 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:05.013338 100 log.go:172] (0xc000792370) Data frame received for 5\nI0504 23:38:05.013359 100 log.go:172] (0xc000694960) (5) Data frame handling\nI0504 23:38:05.013376 100 log.go:172] (0xc000694960) (5) Data frame sent\nI0504 23:38:05.013387 100 log.go:172] (0xc000792370) Data frame received for 5\nI0504 23:38:05.013401 100 log.go:172] (0xc000694960) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.83.3:80/\nI0504 23:38:05.013423 100 log.go:172] (0xc000694960) (5) Data frame sent\nI0504 23:38:05.013491 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:05.013512 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:05.013525 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:05.017584 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:05.017606 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:05.017619 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:05.018115 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:05.018137 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:05.018161 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:05.018177 100 log.go:172] (0xc000792370) Data frame received for 5\nI0504 23:38:05.018188 100 log.go:172] (0xc000694960) (5) Data frame handling\nI0504 23:38:05.018205 100 log.go:172] (0xc000694960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.83.3:80/\nI0504 23:38:05.022486 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:05.022507 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:05.022525 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:05.023338 100 log.go:172] 
(0xc000792370) Data frame received for 5\nI0504 23:38:05.023365 100 log.go:172] (0xc000694960) (5) Data frame handling\nI0504 23:38:05.023392 100 log.go:172] (0xc000694960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.83.3:80/\nI0504 23:38:05.023420 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:05.023432 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:05.023447 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:05.027636 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:05.027649 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:05.027657 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:05.028500 100 log.go:172] (0xc000792370) Data frame received for 5\nI0504 23:38:05.028530 100 log.go:172] (0xc000694960) (5) Data frame handling\nI0504 23:38:05.028568 100 log.go:172] (0xc000694960) (5) Data frame sent\nI0504 23:38:05.028592 100 log.go:172] (0xc000792370) Data frame received for 5\nI0504 23:38:05.028602 100 log.go:172] (0xc000694960) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.83.3:80/\nI0504 23:38:05.028665 100 log.go:172] (0xc000694960) (5) Data frame sent\nI0504 23:38:05.028734 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:05.028765 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:05.028791 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:05.032956 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:05.032985 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:05.032998 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:05.033804 100 log.go:172] (0xc000792370) Data frame received for 5\nI0504 23:38:05.033832 100 log.go:172] (0xc000694960) (5) Data frame handling\nI0504 23:38:05.033851 100 log.go:172] (0xc000694960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.100.83.3:80/\nI0504 23:38:05.033900 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:05.033912 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:05.033921 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:05.037846 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:05.037866 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:05.037882 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:05.038264 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:05.038276 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:05.038284 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:05.038294 100 log.go:172] (0xc000792370) Data frame received for 5\nI0504 23:38:05.038299 100 log.go:172] (0xc000694960) (5) Data frame handling\nI0504 23:38:05.038304 100 log.go:172] (0xc000694960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.83.3:80/\nI0504 23:38:05.044412 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:05.044431 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:05.044449 100 log.go:172] (0xc000309f40) (3) Data frame sent\nI0504 23:38:05.044900 100 log.go:172] (0xc000792370) Data frame received for 5\nI0504 23:38:05.044951 100 log.go:172] (0xc000694960) (5) Data frame handling\nI0504 23:38:05.045339 100 log.go:172] (0xc000792370) Data frame received for 3\nI0504 23:38:05.045353 100 log.go:172] (0xc000309f40) (3) Data frame handling\nI0504 23:38:05.046696 100 log.go:172] (0xc000792370) Data frame received for 1\nI0504 23:38:05.046718 100 log.go:172] (0xc0003099a0) (1) Data frame handling\nI0504 23:38:05.046732 100 log.go:172] (0xc0003099a0) (1) Data frame sent\nI0504 23:38:05.046751 100 log.go:172] (0xc000792370) (0xc0003099a0) Stream removed, broadcasting: 1\nI0504 23:38:05.046772 100 log.go:172] (0xc000792370) Go away received\nI0504 23:38:05.047097 100 
log.go:172] (0xc000792370) (0xc0003099a0) Stream removed, broadcasting: 1\nI0504 23:38:05.047122 100 log.go:172] (0xc000792370) (0xc000309f40) Stream removed, broadcasting: 3\nI0504 23:38:05.047132 100 log.go:172] (0xc000792370) (0xc000694960) Stream removed, broadcasting: 5\n"
May 4 23:38:05.051: INFO: stdout: "\naffinity-clusterip-timeout-7z8qm\naffinity-clusterip-timeout-7z8qm\naffinity-clusterip-timeout-7z8qm\naffinity-clusterip-timeout-7z8qm\naffinity-clusterip-timeout-7z8qm\naffinity-clusterip-timeout-7z8qm\naffinity-clusterip-timeout-7z8qm\naffinity-clusterip-timeout-7z8qm\naffinity-clusterip-timeout-7z8qm\naffinity-clusterip-timeout-7z8qm\naffinity-clusterip-timeout-7z8qm\naffinity-clusterip-timeout-7z8qm\naffinity-clusterip-timeout-7z8qm\naffinity-clusterip-timeout-7z8qm\naffinity-clusterip-timeout-7z8qm\naffinity-clusterip-timeout-7z8qm"
May 4 23:38:05.052: INFO: Received response from host:
May 4 23:38:05.052: INFO: Received response from host: affinity-clusterip-timeout-7z8qm
May 4 23:38:05.052: INFO: Received response from host: affinity-clusterip-timeout-7z8qm
May 4 23:38:05.052: INFO: Received response from host: affinity-clusterip-timeout-7z8qm
May 4 23:38:05.052: INFO: Received response from host: affinity-clusterip-timeout-7z8qm
May 4 23:38:05.052: INFO: Received response from host: affinity-clusterip-timeout-7z8qm
May 4 23:38:05.052: INFO: Received response from host: affinity-clusterip-timeout-7z8qm
May 4 23:38:05.052: INFO: Received response from host: affinity-clusterip-timeout-7z8qm
May 4 23:38:05.052: INFO: Received response from host: affinity-clusterip-timeout-7z8qm
May 4 23:38:05.052: INFO: Received response from host: affinity-clusterip-timeout-7z8qm
May 4 23:38:05.052: INFO: Received response from host: affinity-clusterip-timeout-7z8qm
May 4 23:38:05.052: INFO: Received response from host: affinity-clusterip-timeout-7z8qm
May 4 23:38:05.052: INFO: Received response from host: affinity-clusterip-timeout-7z8qm
May 4 23:38:05.052: INFO:
Received response from host: affinity-clusterip-timeout-7z8qm May 4 23:38:05.052: INFO: Received response from host: affinity-clusterip-timeout-7z8qm May 4 23:38:05.052: INFO: Received response from host: affinity-clusterip-timeout-7z8qm May 4 23:38:05.052: INFO: Received response from host: affinity-clusterip-timeout-7z8qm May 4 23:38:05.052: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1694 execpod-affinityvzgnn -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.100.83.3:80/' May 4 23:38:05.243: INFO: stderr: "I0504 23:38:05.172961 122 log.go:172] (0xc000a05130) (0xc000afe640) Create stream\nI0504 23:38:05.173014 122 log.go:172] (0xc000a05130) (0xc000afe640) Stream added, broadcasting: 1\nI0504 23:38:05.177959 122 log.go:172] (0xc000a05130) Reply frame received for 1\nI0504 23:38:05.178003 122 log.go:172] (0xc000a05130) (0xc000afe000) Create stream\nI0504 23:38:05.178024 122 log.go:172] (0xc000a05130) (0xc000afe000) Stream added, broadcasting: 3\nI0504 23:38:05.178961 122 log.go:172] (0xc000a05130) Reply frame received for 3\nI0504 23:38:05.179002 122 log.go:172] (0xc000a05130) (0xc00023b4a0) Create stream\nI0504 23:38:05.179017 122 log.go:172] (0xc000a05130) (0xc00023b4a0) Stream added, broadcasting: 5\nI0504 23:38:05.179887 122 log.go:172] (0xc000a05130) Reply frame received for 5\nI0504 23:38:05.233952 122 log.go:172] (0xc000a05130) Data frame received for 5\nI0504 23:38:05.233976 122 log.go:172] (0xc00023b4a0) (5) Data frame handling\nI0504 23:38:05.233987 122 log.go:172] (0xc00023b4a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.100.83.3:80/\nI0504 23:38:05.235958 122 log.go:172] (0xc000a05130) Data frame received for 3\nI0504 23:38:05.235975 122 log.go:172] (0xc000afe000) (3) Data frame handling\nI0504 23:38:05.235989 122 log.go:172] (0xc000afe000) (3) Data frame sent\nI0504 23:38:05.236439 122 log.go:172] (0xc000a05130) Data frame received for 
3\nI0504 23:38:05.236466 122 log.go:172] (0xc000afe000) (3) Data frame handling\nI0504 23:38:05.236488 122 log.go:172] (0xc000a05130) Data frame received for 5\nI0504 23:38:05.236494 122 log.go:172] (0xc00023b4a0) (5) Data frame handling\nI0504 23:38:05.238404 122 log.go:172] (0xc000a05130) Data frame received for 1\nI0504 23:38:05.238423 122 log.go:172] (0xc000afe640) (1) Data frame handling\nI0504 23:38:05.238435 122 log.go:172] (0xc000afe640) (1) Data frame sent\nI0504 23:38:05.238449 122 log.go:172] (0xc000a05130) (0xc000afe640) Stream removed, broadcasting: 1\nI0504 23:38:05.238464 122 log.go:172] (0xc000a05130) Go away received\nI0504 23:38:05.238696 122 log.go:172] (0xc000a05130) (0xc000afe640) Stream removed, broadcasting: 1\nI0504 23:38:05.238716 122 log.go:172] (0xc000a05130) (0xc000afe000) Stream removed, broadcasting: 3\nI0504 23:38:05.238722 122 log.go:172] (0xc000a05130) (0xc00023b4a0) Stream removed, broadcasting: 5\n" May 4 23:38:05.243: INFO: stdout: "affinity-clusterip-timeout-7z8qm" May 4 23:38:20.243: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1694 execpod-affinityvzgnn -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.100.83.3:80/' May 4 23:38:20.481: INFO: stderr: "I0504 23:38:20.383236 145 log.go:172] (0xc000b0f290) (0xc000656320) Create stream\nI0504 23:38:20.383296 145 log.go:172] (0xc000b0f290) (0xc000656320) Stream added, broadcasting: 1\nI0504 23:38:20.384847 145 log.go:172] (0xc000b0f290) Reply frame received for 1\nI0504 23:38:20.384886 145 log.go:172] (0xc000b0f290) (0xc000160000) Create stream\nI0504 23:38:20.384901 145 log.go:172] (0xc000b0f290) (0xc000160000) Stream added, broadcasting: 3\nI0504 23:38:20.386298 145 log.go:172] (0xc000b0f290) Reply frame received for 3\nI0504 23:38:20.386362 145 log.go:172] (0xc000b0f290) (0xc000160780) Create stream\nI0504 23:38:20.386381 145 log.go:172] (0xc000b0f290) (0xc000160780) Stream added, 
broadcasting: 5\nI0504 23:38:20.387182 145 log.go:172] (0xc000b0f290) Reply frame received for 5\nI0504 23:38:20.473611 145 log.go:172] (0xc000b0f290) Data frame received for 5\nI0504 23:38:20.473658 145 log.go:172] (0xc000160780) (5) Data frame handling\nI0504 23:38:20.473692 145 log.go:172] (0xc000160780) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.100.83.3:80/\nI0504 23:38:20.474973 145 log.go:172] (0xc000b0f290) Data frame received for 3\nI0504 23:38:20.475002 145 log.go:172] (0xc000160000) (3) Data frame handling\nI0504 23:38:20.475025 145 log.go:172] (0xc000160000) (3) Data frame sent\nI0504 23:38:20.475299 145 log.go:172] (0xc000b0f290) Data frame received for 3\nI0504 23:38:20.475331 145 log.go:172] (0xc000160000) (3) Data frame handling\nI0504 23:38:20.475576 145 log.go:172] (0xc000b0f290) Data frame received for 5\nI0504 23:38:20.475605 145 log.go:172] (0xc000160780) (5) Data frame handling\nI0504 23:38:20.477294 145 log.go:172] (0xc000b0f290) Data frame received for 1\nI0504 23:38:20.477316 145 log.go:172] (0xc000656320) (1) Data frame handling\nI0504 23:38:20.477327 145 log.go:172] (0xc000656320) (1) Data frame sent\nI0504 23:38:20.477343 145 log.go:172] (0xc000b0f290) (0xc000656320) Stream removed, broadcasting: 1\nI0504 23:38:20.477365 145 log.go:172] (0xc000b0f290) Go away received\nI0504 23:38:20.477769 145 log.go:172] (0xc000b0f290) (0xc000656320) Stream removed, broadcasting: 1\nI0504 23:38:20.477794 145 log.go:172] (0xc000b0f290) (0xc000160000) Stream removed, broadcasting: 3\nI0504 23:38:20.477809 145 log.go:172] (0xc000b0f290) (0xc000160780) Stream removed, broadcasting: 5\n" May 4 23:38:20.481: INFO: stdout: "affinity-clusterip-timeout-7z8qm" May 4 23:38:35.482: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1694 execpod-affinityvzgnn -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.100.83.3:80/' May 4 23:38:35.716: INFO: stderr: 
"I0504 23:38:35.618316 165 log.go:172] (0xc000af51e0) (0xc00083fe00) Create stream\nI0504 23:38:35.618378 165 log.go:172] (0xc000af51e0) (0xc00083fe00) Stream added, broadcasting: 1\nI0504 23:38:35.623954 165 log.go:172] (0xc000af51e0) Reply frame received for 1\nI0504 23:38:35.623994 165 log.go:172] (0xc000af51e0) (0xc000616460) Create stream\nI0504 23:38:35.624006 165 log.go:172] (0xc000af51e0) (0xc000616460) Stream added, broadcasting: 3\nI0504 23:38:35.625454 165 log.go:172] (0xc000af51e0) Reply frame received for 3\nI0504 23:38:35.625506 165 log.go:172] (0xc000af51e0) (0xc000540fa0) Create stream\nI0504 23:38:35.625525 165 log.go:172] (0xc000af51e0) (0xc000540fa0) Stream added, broadcasting: 5\nI0504 23:38:35.626443 165 log.go:172] (0xc000af51e0) Reply frame received for 5\nI0504 23:38:35.700883 165 log.go:172] (0xc000af51e0) Data frame received for 5\nI0504 23:38:35.700913 165 log.go:172] (0xc000540fa0) (5) Data frame handling\nI0504 23:38:35.700934 165 log.go:172] (0xc000540fa0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.100.83.3:80/\nI0504 23:38:35.708353 165 log.go:172] (0xc000af51e0) Data frame received for 3\nI0504 23:38:35.708386 165 log.go:172] (0xc000616460) (3) Data frame handling\nI0504 23:38:35.708406 165 log.go:172] (0xc000616460) (3) Data frame sent\nI0504 23:38:35.709475 165 log.go:172] (0xc000af51e0) Data frame received for 5\nI0504 23:38:35.709558 165 log.go:172] (0xc000540fa0) (5) Data frame handling\nI0504 23:38:35.709589 165 log.go:172] (0xc000af51e0) Data frame received for 3\nI0504 23:38:35.709598 165 log.go:172] (0xc000616460) (3) Data frame handling\nI0504 23:38:35.710962 165 log.go:172] (0xc000af51e0) Data frame received for 1\nI0504 23:38:35.710995 165 log.go:172] (0xc00083fe00) (1) Data frame handling\nI0504 23:38:35.711010 165 log.go:172] (0xc00083fe00) (1) Data frame sent\nI0504 23:38:35.711026 165 log.go:172] (0xc000af51e0) (0xc00083fe00) Stream removed, broadcasting: 1\nI0504 23:38:35.711044 165 log.go:172] 
(0xc000af51e0) Go away received\nI0504 23:38:35.711432 165 log.go:172] (0xc000af51e0) (0xc00083fe00) Stream removed, broadcasting: 1\nI0504 23:38:35.711455 165 log.go:172] (0xc000af51e0) (0xc000616460) Stream removed, broadcasting: 3\nI0504 23:38:35.711469 165 log.go:172] (0xc000af51e0) (0xc000540fa0) Stream removed, broadcasting: 5\n" May 4 23:38:35.716: INFO: stdout: "affinity-clusterip-timeout-xpczt" May 4 23:38:35.716: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-1694, will wait for the garbage collector to delete the pods May 4 23:38:35.810: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 7.257539ms May 4 23:38:36.211: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 400.215006ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:38:45.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1694" for this suite. 
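
The affinity check above curls the Service's ClusterIP repeatedly and compares the responding pod names: before the affinity timeout every response names affinity-clusterip-timeout-7z8qm, and after waiting past the timeout a different backend (affinity-clusterip-timeout-xpczt) answers. The pass/fail logic can be sketched as follows (a simplified illustration using the hostnames from this log, not the framework's actual code):

```python
def same_backend(responses):
    # session affinity holds while every curl response names the same backend pod
    return len(set(responses)) == 1

# hostnames captured from the repeated curls in the log above
before_timeout = ["affinity-clusterip-timeout-7z8qm"] * 4
after_timeout = before_timeout + ["affinity-clusterip-timeout-xpczt"]

assert same_backend(before_timeout)     # affinity held before the timeout
assert not same_backend(after_timeout)  # a new backend answered after it
```

The test passes precisely because affinity holds within the configured timeout window and is allowed to break once the window expires.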
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:64.101 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":2,"skipped":9,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:38:45.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs May 4 23:38:45.499: INFO: Waiting up to 5m0s for pod "pod-004d9e2c-64cb-4d4b-b92e-cd560f9b4ff0" in namespace "emptydir-3124" to be "Succeeded or Failed" May 4 23:38:45.538: INFO: Pod "pod-004d9e2c-64cb-4d4b-b92e-cd560f9b4ff0": Phase="Pending", Reason="", readiness=false. Elapsed: 39.473656ms May 4 23:38:47.543: INFO: Pod "pod-004d9e2c-64cb-4d4b-b92e-cd560f9b4ff0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.043706495s May 4 23:38:49.547: INFO: Pod "pod-004d9e2c-64cb-4d4b-b92e-cd560f9b4ff0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048210238s STEP: Saw pod success May 4 23:38:49.547: INFO: Pod "pod-004d9e2c-64cb-4d4b-b92e-cd560f9b4ff0" satisfied condition "Succeeded or Failed" May 4 23:38:49.551: INFO: Trying to get logs from node latest-worker pod pod-004d9e2c-64cb-4d4b-b92e-cd560f9b4ff0 container test-container: STEP: delete the pod May 4 23:38:49.584: INFO: Waiting for pod pod-004d9e2c-64cb-4d4b-b92e-cd560f9b4ff0 to disappear May 4 23:38:49.617: INFO: Pod pod-004d9e2c-64cb-4d4b-b92e-cd560f9b4ff0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:38:49.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3124" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":3,"skipped":18,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:38:49.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 4 23:38:49.687: 
INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 4 23:38:49.698: INFO: Waiting for terminating namespaces to be deleted... May 4 23:38:49.701: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 4 23:38:49.705: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 4 23:38:49.705: INFO: Container kindnet-cni ready: true, restart count 0 May 4 23:38:49.705: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 4 23:38:49.705: INFO: Container kube-proxy ready: true, restart count 0 May 4 23:38:49.705: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 4 23:38:49.710: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 4 23:38:49.710: INFO: Container kindnet-cni ready: true, restart count 0 May 4 23:38:49.710: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 4 23:38:49.710: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
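
The predicate this scheduling test validates is that two pods conflict when they request the same hostPort and protocol and their host IPs overlap; 0.0.0.0 binds all interfaces, so it overlaps any specific hostIP. A rough sketch of that conflict rule (a hypothetical helper for illustration, not the scheduler's code):

```python
def host_ports_conflict(a, b):
    # same port and protocol, with overlapping host IPs -> conflict;
    # 0.0.0.0 (all interfaces) overlaps every specific hostIP
    if a["hostPort"] != b["hostPort"] or a["protocol"] != b["protocol"]:
        return False
    return a["hostIP"] == b["hostIP"] or "0.0.0.0" in (a["hostIP"], b["hostIP"])

pod4 = {"hostPort": 54322, "protocol": "TCP", "hostIP": "0.0.0.0"}
pod5 = {"hostPort": 54322, "protocol": "TCP", "hostIP": "127.0.0.1"}
assert host_ports_conflict(pod4, pod5)  # second pod stays unschedulable on that node
```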
STEP: verifying the node has the label kubernetes.io/e2e-15eb8507-a8f1-47ba-88d9-fbc3601b3220 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-15eb8507-a8f1-47ba-88d9-fbc3601b3220 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-15eb8507-a8f1-47ba-88d9-fbc3601b3220 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:43:58.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6296" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.416 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":288,"completed":4,"skipped":33,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:43:58.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-ac170654-3afa-4774-87b8-1f1a33158dcc [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:43:58.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3546" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":288,"completed":5,"skipped":71,"failed":0} ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:43:58.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:43:58.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8115" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":288,"completed":6,"skipped":71,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:43:58.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-d6097371-6781-46e2-9ce3-4645d2a1ede6 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-d6097371-6781-46e2-9ce3-4645d2a1ede6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:45:26.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-91" for this suite. 
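
The projected-ConfigMap update test above mounts the ConfigMap into a pod, patches the ConfigMap, and then polls until the new value shows up in the mounted file; the kubelet syncs projected volumes periodically rather than instantly, which is why the test runs for well over a minute. The waiting pattern can be sketched like this (a simplified poll loop under my own assumptions, not the framework's implementation):

```python
import time

def wait_for_volume_update(read_value, expected, timeout=90.0, poll=2.0):
    # poll the file exposed by the projected volume until the updated
    # ConfigMap value appears, or give up after `timeout` seconds
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if read_value() == expected:
            return True
        time.sleep(poll)
    return False

# simulated volume reads: the update becomes visible on the third poll
reads = iter(["value-1", "value-1", "value-2"])
assert wait_for_volume_update(lambda: next(reads), "value-2", timeout=30, poll=0)
```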
• [SLOW TEST:88.663 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":7,"skipped":146,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:45:26.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:45:30.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4800" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":8,"skipped":147,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:45:30.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-576db32c-df41-4da2-a6a6-8ac0e6213248 STEP: Creating a pod to test consume configMaps May 4 23:45:31.100: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0fb1eb86-eae6-4551-ac25-c30606f1d06a" in namespace "projected-8079" to be "Succeeded or Failed" May 4 23:45:31.125: INFO: Pod "pod-projected-configmaps-0fb1eb86-eae6-4551-ac25-c30606f1d06a": Phase="Pending", Reason="", readiness=false. Elapsed: 25.73178ms May 4 23:45:33.267: INFO: Pod "pod-projected-configmaps-0fb1eb86-eae6-4551-ac25-c30606f1d06a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166908831s May 4 23:45:35.271: INFO: Pod "pod-projected-configmaps-0fb1eb86-eae6-4551-ac25-c30606f1d06a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.171103954s May 4 23:45:37.275: INFO: Pod "pod-projected-configmaps-0fb1eb86-eae6-4551-ac25-c30606f1d06a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.175365123s STEP: Saw pod success May 4 23:45:37.275: INFO: Pod "pod-projected-configmaps-0fb1eb86-eae6-4551-ac25-c30606f1d06a" satisfied condition "Succeeded or Failed" May 4 23:45:37.278: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-0fb1eb86-eae6-4551-ac25-c30606f1d06a container projected-configmap-volume-test: STEP: delete the pod May 4 23:45:37.358: INFO: Waiting for pod pod-projected-configmaps-0fb1eb86-eae6-4551-ac25-c30606f1d06a to disappear May 4 23:45:37.367: INFO: Pod pod-projected-configmaps-0fb1eb86-eae6-4551-ac25-c30606f1d06a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:45:37.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8079" for this suite. 
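
Most of the volume tests in this run follow the same wait pattern visible above: poll the pod phase until it reaches a terminal state, where either terminal phase satisfies the "Succeeded or Failed" condition (success itself is asserted separately afterwards). Condensed into a sketch, assuming plain string phases:

```python
def succeeded_or_failed(phase):
    # terminal pod phases satisfy the framework's wait condition
    return phase in ("Succeeded", "Failed")

# phase sequence observed for the projected-configmap pod above
phases = ["Pending", "Pending", "Pending", "Succeeded"]
polls_needed = next(i for i, p in enumerate(phases) if succeeded_or_failed(p))
assert polls_needed == 3 and phases[polls_needed] == "Succeeded"
```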
• [SLOW TEST:6.412 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":9,"skipped":207,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:45:37.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 23:45:37.441: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:45:41.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5844" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":288,"completed":10,"skipped":215,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:45:41.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-4d51ca5a-6fd8-43d5-b1cc-be0eca2c5c8e STEP: Creating a pod to test consume configMaps May 4 23:45:41.689: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-59a38a42-e31d-4b1f-9dc8-82369aa3a8f9" in namespace "projected-4974" to be "Succeeded or Failed" May 4 23:45:41.703: INFO: Pod "pod-projected-configmaps-59a38a42-e31d-4b1f-9dc8-82369aa3a8f9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.035691ms May 4 23:45:43.759: INFO: Pod "pod-projected-configmaps-59a38a42-e31d-4b1f-9dc8-82369aa3a8f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070288832s May 4 23:45:45.812: INFO: Pod "pod-projected-configmaps-59a38a42-e31d-4b1f-9dc8-82369aa3a8f9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.122713758s STEP: Saw pod success May 4 23:45:45.812: INFO: Pod "pod-projected-configmaps-59a38a42-e31d-4b1f-9dc8-82369aa3a8f9" satisfied condition "Succeeded or Failed" May 4 23:45:45.815: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-59a38a42-e31d-4b1f-9dc8-82369aa3a8f9 container projected-configmap-volume-test: STEP: delete the pod May 4 23:45:45.974: INFO: Waiting for pod pod-projected-configmaps-59a38a42-e31d-4b1f-9dc8-82369aa3a8f9 to disappear May 4 23:45:46.005: INFO: Pod pod-projected-configmaps-59a38a42-e31d-4b1f-9dc8-82369aa3a8f9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:45:46.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4974" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":11,"skipped":245,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:45:46.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:46:02.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4841" for this suite. • [SLOW TEST:16.191 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":288,"completed":12,"skipped":256,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:46:02.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 4 23:46:02.919: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 4 23:46:04.976: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724232762, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724232762, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724232763, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724232762, 
loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 23:46:07.019: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724232762, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724232762, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724232763, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724232762, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 4 23:46:10.011: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 23:46:10.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6449-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:46:11.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3781" for this suite. STEP: Destroying namespace "webhook-3781-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.124 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":288,"completed":13,"skipped":258,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:46:11.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:46:28.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3714" for this suite. • [SLOW TEST:16.681 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":288,"completed":14,"skipped":289,"failed":0} [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:46:28.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:46:35.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8133" for this suite. • [SLOW TEST:7.384 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":288,"completed":15,"skipped":289,"failed":0} [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:46:35.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3082.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3082.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3082.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3082.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3082.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3082.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 4 23:46:41.608: INFO: DNS probes using dns-3082/dns-test-8a3437d4-6a5a-44b5-ae72-0208873360b9 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:46:41.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3082" for this suite. • [SLOW TEST:6.339 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":288,"completed":16,"skipped":289,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:46:41.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account 
to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-cd7b8adb-a788-44ff-8b33-0a06f8c1a0b5 STEP: Creating a pod to test consume secrets May 4 23:46:42.198: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3ee3665f-387f-4897-9031-806c05d0d9d9" in namespace "projected-3096" to be "Succeeded or Failed" May 4 23:46:42.258: INFO: Pod "pod-projected-secrets-3ee3665f-387f-4897-9031-806c05d0d9d9": Phase="Pending", Reason="", readiness=false. Elapsed: 60.466192ms May 4 23:46:44.321: INFO: Pod "pod-projected-secrets-3ee3665f-387f-4897-9031-806c05d0d9d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123711388s May 4 23:46:46.326: INFO: Pod "pod-projected-secrets-3ee3665f-387f-4897-9031-806c05d0d9d9": Phase="Running", Reason="", readiness=true. Elapsed: 4.128425044s May 4 23:46:48.330: INFO: Pod "pod-projected-secrets-3ee3665f-387f-4897-9031-806c05d0d9d9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.132825002s STEP: Saw pod success May 4 23:46:48.330: INFO: Pod "pod-projected-secrets-3ee3665f-387f-4897-9031-806c05d0d9d9" satisfied condition "Succeeded or Failed" May 4 23:46:48.334: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-3ee3665f-387f-4897-9031-806c05d0d9d9 container projected-secret-volume-test: STEP: delete the pod May 4 23:46:48.373: INFO: Waiting for pod pod-projected-secrets-3ee3665f-387f-4897-9031-806c05d0d9d9 to disappear May 4 23:46:48.404: INFO: Pod pod-projected-secrets-3ee3665f-387f-4897-9031-806c05d0d9d9 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:46:48.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3096" for this suite. • [SLOW TEST:6.678 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":17,"skipped":304,"failed":0} SSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:46:48.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:47:18.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2392" for this suite. 
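The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` entries in this log (with their Pending → Running → Succeeded progression) reflect a simple poll-until-terminal-phase loop. A minimal sketch of that pattern, where `get_phase` is a hypothetical stand-in for the real API lookup the framework performs:

```python
import time

def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout=300.0, poll=2.0):
    # Poll until the pod reaches one of the wanted phases, mirroring the
    # log's 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"'
    # entries. `get_phase` is a hypothetical callable, not a real client API.
    deadline = time.monotonic() + timeout
    phase = None
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in want:
            return phase
        time.sleep(poll)
    raise TimeoutError(f"pod still {phase!r} after {timeout}s")
```

The framework also logs the elapsed time at each poll, which is why consecutive entries for the same pod show increasing `Elapsed:` values.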
• [SLOW TEST:29.831 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":288,"completed":18,"skipped":309,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:47:18.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 4 23:47:18.303: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - 
--namespace=kubectl-7852' May 4 23:47:18.638: INFO: stderr: "" May 4 23:47:18.638: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 4 23:47:18.638: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7852' May 4 23:47:18.790: INFO: stderr: "" May 4 23:47:18.790: INFO: stdout: "update-demo-nautilus-lsrz4 update-demo-nautilus-vwz8c " May 4 23:47:18.790: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lsrz4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7852' May 4 23:47:18.908: INFO: stderr: "" May 4 23:47:18.909: INFO: stdout: "" May 4 23:47:18.909: INFO: update-demo-nautilus-lsrz4 is created but not running May 4 23:47:23.909: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7852' May 4 23:47:24.028: INFO: stderr: "" May 4 23:47:24.028: INFO: stdout: "update-demo-nautilus-lsrz4 update-demo-nautilus-vwz8c " May 4 23:47:24.028: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lsrz4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7852' May 4 23:47:24.145: INFO: stderr: "" May 4 23:47:24.146: INFO: stdout: "true" May 4 23:47:24.146: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lsrz4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7852' May 4 23:47:24.270: INFO: stderr: "" May 4 23:47:24.270: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 4 23:47:24.270: INFO: validating pod update-demo-nautilus-lsrz4 May 4 23:47:24.274: INFO: got data: { "image": "nautilus.jpg" } May 4 23:47:24.274: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 4 23:47:24.274: INFO: update-demo-nautilus-lsrz4 is verified up and running May 4 23:47:24.275: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vwz8c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7852' May 4 23:47:24.374: INFO: stderr: "" May 4 23:47:24.374: INFO: stdout: "true" May 4 23:47:24.374: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vwz8c -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7852' May 4 23:47:24.476: INFO: stderr: "" May 4 23:47:24.476: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 4 23:47:24.476: INFO: validating pod update-demo-nautilus-vwz8c May 4 23:47:24.480: INFO: got data: { "image": "nautilus.jpg" } May 4 23:47:24.480: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 4 23:47:24.480: INFO: update-demo-nautilus-vwz8c is verified up and running STEP: scaling down the replication controller May 4 23:47:24.483: INFO: scanned /root for discovery docs: May 4 23:47:24.483: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7852' May 4 23:47:25.621: INFO: stderr: "" May 4 23:47:25.621: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 4 23:47:25.622: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7852' May 4 23:47:25.728: INFO: stderr: "" May 4 23:47:25.728: INFO: stdout: "update-demo-nautilus-lsrz4 update-demo-nautilus-vwz8c " STEP: Replicas for name=update-demo: expected=1 actual=2 May 4 23:47:30.728: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7852' May 4 23:47:30.848: INFO: stderr: "" May 4 23:47:30.848: INFO: stdout: "update-demo-nautilus-lsrz4 update-demo-nautilus-vwz8c " STEP: Replicas for name=update-demo: expected=1 actual=2 May 4 23:47:35.849: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7852' May 4 23:47:35.953: INFO: stderr: "" May 4 23:47:35.953: INFO: stdout: "update-demo-nautilus-lsrz4 " May 4 23:47:35.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lsrz4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7852' May 4 23:47:36.038: INFO: stderr: "" May 4 23:47:36.038: INFO: stdout: "true" May 4 23:47:36.038: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lsrz4 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7852' May 4 23:47:36.124: INFO: stderr: "" May 4 23:47:36.125: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 4 23:47:36.125: INFO: validating pod update-demo-nautilus-lsrz4 May 4 23:47:36.128: INFO: got data: { "image": "nautilus.jpg" } May 4 23:47:36.128: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 4 23:47:36.128: INFO: update-demo-nautilus-lsrz4 is verified up and running STEP: scaling up the replication controller May 4 23:47:36.130: INFO: scanned /root for discovery docs: May 4 23:47:36.130: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7852' May 4 23:47:37.251: INFO: stderr: "" May 4 23:47:37.251: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 4 23:47:37.251: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7852' May 4 23:47:37.396: INFO: stderr: "" May 4 23:47:37.396: INFO: stdout: "update-demo-nautilus-7nwtd update-demo-nautilus-lsrz4 " May 4 23:47:37.396: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7nwtd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7852' May 4 23:47:37.505: INFO: stderr: "" May 4 23:47:37.505: INFO: stdout: "" May 4 23:47:37.505: INFO: update-demo-nautilus-7nwtd is created but not running May 4 23:47:42.505: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7852' May 4 23:47:42.633: INFO: stderr: "" May 4 23:47:42.633: INFO: stdout: "update-demo-nautilus-7nwtd update-demo-nautilus-lsrz4 " May 4 23:47:42.634: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7nwtd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7852' May 4 23:47:42.721: INFO: stderr: "" May 4 23:47:42.721: INFO: stdout: "true" May 4 23:47:42.721: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7nwtd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7852' May 4 23:47:42.833: INFO: stderr: "" May 4 23:47:42.833: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 4 23:47:42.833: INFO: validating pod update-demo-nautilus-7nwtd May 4 23:47:42.838: INFO: got data: { "image": "nautilus.jpg" } May 4 23:47:42.838: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 4 23:47:42.838: INFO: update-demo-nautilus-7nwtd is verified up and running May 4 23:47:42.838: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lsrz4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7852' May 4 23:47:42.934: INFO: stderr: "" May 4 23:47:42.934: INFO: stdout: "true" May 4 23:47:42.934: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lsrz4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7852' May 4 23:47:43.028: INFO: stderr: "" May 4 23:47:43.028: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 4 23:47:43.028: INFO: validating pod update-demo-nautilus-lsrz4 May 4 23:47:43.032: INFO: got data: { "image": "nautilus.jpg" } May 4 23:47:43.032: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 4 23:47:43.032: INFO: update-demo-nautilus-lsrz4 is verified up and running STEP: using delete to clean up resources May 4 23:47:43.032: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7852' May 4 23:47:43.137: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 4 23:47:43.138: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 4 23:47:43.138: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7852' May 4 23:47:43.250: INFO: stderr: "No resources found in kubectl-7852 namespace.\n" May 4 23:47:43.250: INFO: stdout: "" May 4 23:47:43.250: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7852 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 4 23:47:43.367: INFO: stderr: "" May 4 23:47:43.367: INFO: stdout: "update-demo-nautilus-7nwtd\nupdate-demo-nautilus-lsrz4\n" May 4 23:47:43.868: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7852' May 4 23:47:43.981: INFO: stderr: "No resources found in kubectl-7852 namespace.\n" May 4 23:47:43.981: INFO: stdout: "" May 4 23:47:43.981: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7852 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 4 23:47:44.133: INFO: stderr: "" May 4 23:47:44.133: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:47:44.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7852" for this suite. 
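The scale-down sequence above (`Replicas for name=update-demo: expected=1 actual=2`, retried every 5 s until only one pod remains) is a poll-until-replica-count loop around the `kubectl get pods -o template` listing. A sketch under the assumption that `list_pods` returns the current pod names for the selector (a hypothetical stand-in for that kubectl call):

```python
import time

def wait_for_replica_count(list_pods, expected, timeout=300.0, poll=5.0):
    # Re-list pods until the selector yields exactly `expected` names,
    # echoing the log's 'expected=1 actual=2' retries. `list_pods` is a
    # hypothetical callable, not a real client API.
    deadline = time.monotonic() + timeout
    names = list_pods()
    while time.monotonic() < deadline:
        if len(names) == expected:
            return names
        time.sleep(poll)
        names = list_pods()
    raise TimeoutError(f"expected {expected} replicas, saw {len(names)}")
```

Once the count matches, each surviving pod is individually checked for a running `update-demo` container and the expected image, as the subsequent template queries in the log show.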
• [SLOW TEST:25.897 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":288,"completed":19,"skipped":325,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:47:44.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 23:47:44.422: INFO: Creating ReplicaSet my-hostname-basic-8c36ef00-7c92-404c-b742-5956d65147d1 May 4 23:47:44.577: INFO: Pod name my-hostname-basic-8c36ef00-7c92-404c-b742-5956d65147d1: Found 0 pods out of 1 May 4 23:47:49.580: INFO: Pod name my-hostname-basic-8c36ef00-7c92-404c-b742-5956d65147d1: Found 1 pods out of 1 May 4 23:47:49.580: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-8c36ef00-7c92-404c-b742-5956d65147d1" is running May 4 23:47:49.583: INFO: Pod "my-hostname-basic-8c36ef00-7c92-404c-b742-5956d65147d1-4nmfx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 
00:00:00 +0000 UTC LastTransitionTime:2020-05-04 23:47:44 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-04 23:47:47 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-04 23:47:47 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-04 23:47:44 +0000 UTC Reason: Message:}]) May 4 23:47:49.583: INFO: Trying to dial the pod May 4 23:47:54.592: INFO: Controller my-hostname-basic-8c36ef00-7c92-404c-b742-5956d65147d1: Got expected result from replica 1 [my-hostname-basic-8c36ef00-7c92-404c-b742-5956d65147d1-4nmfx]: "my-hostname-basic-8c36ef00-7c92-404c-b742-5956d65147d1-4nmfx", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:47:54.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3443" for this suite. 
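The `Got expected result from replica 1 [...]` entry above comes from dialing each ReplicaSet pod and expecting the response body to be the pod's own name (the test image serves its hostname). A minimal sketch of that verification, where `responses` is a hypothetical mapping of pod name to response body:

```python
def verify_hostname_replicas(responses):
    # Each replica of the hostname-serving image should answer with its own
    # pod name; collect any replicas that do not.
    mismatches = {name: body for name, body in responses.items()
                  if body != name}
    return mismatches  # empty dict means all replicas verified
```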
• [SLOW TEST:10.456 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":20,"skipped":335,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:47:54.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-038abe1a-c38a-4af8-9908-a68a03c99d9e STEP: Creating a pod to test consume secrets May 4 23:47:54.735: INFO: Waiting up to 5m0s for pod "pod-secrets-d8dc4d19-2892-4bd9-b5c0-a619567ec328" in namespace "secrets-5965" to be "Succeeded or Failed" May 4 23:47:54.774: INFO: Pod "pod-secrets-d8dc4d19-2892-4bd9-b5c0-a619567ec328": Phase="Pending", Reason="", readiness=false. Elapsed: 39.707257ms May 4 23:47:56.788: INFO: Pod "pod-secrets-d8dc4d19-2892-4bd9-b5c0-a619567ec328": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.053138244s May 4 23:47:58.791: INFO: Pod "pod-secrets-d8dc4d19-2892-4bd9-b5c0-a619567ec328": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056634027s STEP: Saw pod success May 4 23:47:58.791: INFO: Pod "pod-secrets-d8dc4d19-2892-4bd9-b5c0-a619567ec328" satisfied condition "Succeeded or Failed" May 4 23:47:58.794: INFO: Trying to get logs from node latest-worker pod pod-secrets-d8dc4d19-2892-4bd9-b5c0-a619567ec328 container secret-volume-test: STEP: delete the pod May 4 23:47:58.976: INFO: Waiting for pod pod-secrets-d8dc4d19-2892-4bd9-b5c0-a619567ec328 to disappear May 4 23:47:59.165: INFO: Pod pod-secrets-d8dc4d19-2892-4bd9-b5c0-a619567ec328 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:47:59.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5965" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":21,"skipped":355,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:47:59.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files 
[LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 4 23:47:59.616: INFO: Waiting up to 5m0s for pod "downwardapi-volume-743cb35c-ff44-4281-b12a-184d577d8519" in namespace "downward-api-1833" to be "Succeeded or Failed" May 4 23:47:59.711: INFO: Pod "downwardapi-volume-743cb35c-ff44-4281-b12a-184d577d8519": Phase="Pending", Reason="", readiness=false. Elapsed: 94.834362ms May 4 23:48:01.715: INFO: Pod "downwardapi-volume-743cb35c-ff44-4281-b12a-184d577d8519": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099254294s May 4 23:48:03.719: INFO: Pod "downwardapi-volume-743cb35c-ff44-4281-b12a-184d577d8519": Phase="Running", Reason="", readiness=true. Elapsed: 4.10375471s May 4 23:48:05.724: INFO: Pod "downwardapi-volume-743cb35c-ff44-4281-b12a-184d577d8519": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.108049866s STEP: Saw pod success May 4 23:48:05.724: INFO: Pod "downwardapi-volume-743cb35c-ff44-4281-b12a-184d577d8519" satisfied condition "Succeeded or Failed" May 4 23:48:05.726: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-743cb35c-ff44-4281-b12a-184d577d8519 container client-container: STEP: delete the pod May 4 23:48:05.796: INFO: Waiting for pod downwardapi-volume-743cb35c-ff44-4281-b12a-184d577d8519 to disappear May 4 23:48:05.809: INFO: Pod downwardapi-volume-743cb35c-ff44-4281-b12a-184d577d8519 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:48:05.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1833" for this suite. 
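The Downward API `defaultMode` being tested above is an octal integer in the pod spec that becomes the permission bits on each projected file. The sketch below shows that mapping; the value 0400 is an assumption (a value commonly used for this check), not one read from the log:

```go
package main

import (
	"fmt"
	"os"
)

// formatMode converts a volume's defaultMode (an octal integer in the
// pod spec) into the permission string the test container would see
// from `ls -l` on the projected file.
func formatMode(defaultMode uint32) string {
	return os.FileMode(defaultMode).String()
}

func main() {
	fmt.Println(formatMode(0400)) // -r--------
	fmt.Println(formatMode(0644)) // -rw-r--r--
}
```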
• [SLOW TEST:6.378 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":22,"skipped":364,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:48:05.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 23:48:06.019: INFO: The status of Pod test-webserver-5e367510-6b0f-4f1a-b519-02349d15c430 is Pending, waiting for it to be Running (with Ready = true) May 4 23:48:08.023: INFO: The status of Pod test-webserver-5e367510-6b0f-4f1a-b519-02349d15c430 is Pending, waiting for it to be Running (with Ready = true) May 4 23:48:10.024: INFO: The status of Pod test-webserver-5e367510-6b0f-4f1a-b519-02349d15c430 is Running (Ready 
= false) May 4 23:48:12.023: INFO: The status of Pod test-webserver-5e367510-6b0f-4f1a-b519-02349d15c430 is Running (Ready = false) May 4 23:48:14.023: INFO: The status of Pod test-webserver-5e367510-6b0f-4f1a-b519-02349d15c430 is Running (Ready = false) May 4 23:48:16.052: INFO: The status of Pod test-webserver-5e367510-6b0f-4f1a-b519-02349d15c430 is Running (Ready = false) May 4 23:48:18.023: INFO: The status of Pod test-webserver-5e367510-6b0f-4f1a-b519-02349d15c430 is Running (Ready = false) May 4 23:48:20.024: INFO: The status of Pod test-webserver-5e367510-6b0f-4f1a-b519-02349d15c430 is Running (Ready = false) May 4 23:48:22.023: INFO: The status of Pod test-webserver-5e367510-6b0f-4f1a-b519-02349d15c430 is Running (Ready = false) May 4 23:48:24.023: INFO: The status of Pod test-webserver-5e367510-6b0f-4f1a-b519-02349d15c430 is Running (Ready = false) May 4 23:48:26.023: INFO: The status of Pod test-webserver-5e367510-6b0f-4f1a-b519-02349d15c430 is Running (Ready = true) May 4 23:48:26.026: INFO: Container started at 2020-05-04 23:48:08 +0000 UTC, pod became ready at 2020-05-04 23:48:24 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:48:26.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7796" for this suite. 
• [SLOW TEST:20.216 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":288,"completed":23,"skipped":390,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:48:26.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token May 4 23:48:26.644: INFO: created pod pod-service-account-defaultsa May 4 23:48:26.644: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 4 23:48:26.672: INFO: created pod pod-service-account-mountsa May 4 23:48:26.672: INFO: pod pod-service-account-mountsa service account token volume mount: true May 4 23:48:26.686: INFO: created pod pod-service-account-nomountsa May 4 23:48:26.686: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 4 23:48:26.710: INFO: created pod pod-service-account-defaultsa-mountspec May 4 
23:48:26.710: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 4 23:48:26.759: INFO: created pod pod-service-account-mountsa-mountspec May 4 23:48:26.759: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 4 23:48:26.785: INFO: created pod pod-service-account-nomountsa-mountspec May 4 23:48:26.785: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 4 23:48:26.822: INFO: created pod pod-service-account-defaultsa-nomountspec May 4 23:48:26.822: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 4 23:48:27.053: INFO: created pod pod-service-account-mountsa-nomountspec May 4 23:48:27.053: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 4 23:48:27.341: INFO: created pod pod-service-account-nomountsa-nomountspec May 4 23:48:27.341: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:48:27.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4918" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":288,"completed":24,"skipped":406,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:48:27.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server May 4 23:48:27.691: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:48:27.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3111" for this suite. 
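`kubectl proxy -p 0`, exercised above, asks the kernel for an ephemeral port instead of a fixed one. The same mechanism can be seen with any TCP listener bound to port 0, as in this small sketch:

```go
package main

import (
	"fmt"
	"net"
)

// ephemeralPort binds to port 0, letting the kernel choose a free
// port, and reports which port was assigned.
func ephemeralPort() (int, error) {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return 0, err
	}
	defer ln.Close()
	return ln.Addr().(*net.TCPAddr).Port, nil
}

func main() {
	port, err := ephemeralPort()
	if err != nil {
		panic(err)
	}
	fmt.Println("kernel assigned port:", port)
}
```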
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":288,"completed":25,"skipped":430,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:48:27.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 23:50:27.868: INFO: Deleting pod "var-expansion-af6335c5-ef6a-41ef-856b-4713825938f6" in namespace "var-expansion-8979" May 4 23:50:27.873: INFO: Wait up to 5m0s for pod "var-expansion-af6335c5-ef6a-41ef-856b-4713825938f6" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:50:31.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8979" for this suite. 
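The Variable Expansion test above expects a pod whose expanded volume subPath is an absolute path to fail. A simplified sketch of that validation rule follows; the real kubelet check is more involved (it also guards against paths escaping the volume), so treat this as an illustration, not the actual implementation:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// validateSubPath rejects the two subPath shapes the e2e suite probes:
// absolute paths and paths that climb out of the volume via "..".
func validateSubPath(p string) error {
	if filepath.IsAbs(p) {
		return fmt.Errorf("subPath %q must not be an absolute path", p)
	}
	for _, part := range strings.Split(p, "/") {
		if part == ".." {
			return fmt.Errorf("subPath %q must not contain '..'", p)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateSubPath("/absolute/path")) // rejected
	fmt.Println(validateSubPath("relative/path"))  // <nil>
}
```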
• [SLOW TEST:124.102 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":288,"completed":26,"skipped":431,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:50:31.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy May 4 23:50:31.955: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix551334670/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:50:32.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"kubectl-9673" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":288,"completed":27,"skipped":451,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:50:32.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition May 4 23:50:32.210: INFO: Waiting up to 5m0s for pod "var-expansion-ff5c550f-7262-46c9-b452-b9207c4aa904" in namespace "var-expansion-3544" to be "Succeeded or Failed" May 4 23:50:32.214: INFO: Pod "var-expansion-ff5c550f-7262-46c9-b452-b9207c4aa904": Phase="Pending", Reason="", readiness=false. Elapsed: 3.902414ms May 4 23:50:34.219: INFO: Pod "var-expansion-ff5c550f-7262-46c9-b452-b9207c4aa904": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008810925s May 4 23:50:36.223: INFO: Pod "var-expansion-ff5c550f-7262-46c9-b452-b9207c4aa904": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013405874s STEP: Saw pod success May 4 23:50:36.223: INFO: Pod "var-expansion-ff5c550f-7262-46c9-b452-b9207c4aa904" satisfied condition "Succeeded or Failed" May 4 23:50:36.226: INFO: Trying to get logs from node latest-worker pod var-expansion-ff5c550f-7262-46c9-b452-b9207c4aa904 container dapi-container: STEP: delete the pod May 4 23:50:36.279: INFO: Waiting for pod var-expansion-ff5c550f-7262-46c9-b452-b9207c4aa904 to disappear May 4 23:50:36.286: INFO: Pod var-expansion-ff5c550f-7262-46c9-b452-b9207c4aa904 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:50:36.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3544" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":288,"completed":28,"skipped":460,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:50:36.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-v7f6 STEP: Creating a pod to test atomic-volume-subpath May 4 23:50:36.750: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-v7f6" in namespace "subpath-4614" to be "Succeeded or Failed" May 4 23:50:36.765: INFO: Pod "pod-subpath-test-configmap-v7f6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.309764ms May 4 23:50:38.773: INFO: Pod "pod-subpath-test-configmap-v7f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022846082s May 4 23:50:40.777: INFO: Pod "pod-subpath-test-configmap-v7f6": Phase="Running", Reason="", readiness=true. Elapsed: 4.027789118s May 4 23:50:42.782: INFO: Pod "pod-subpath-test-configmap-v7f6": Phase="Running", Reason="", readiness=true. Elapsed: 6.032018932s May 4 23:50:44.786: INFO: Pod "pod-subpath-test-configmap-v7f6": Phase="Running", Reason="", readiness=true. Elapsed: 8.036467026s May 4 23:50:46.790: INFO: Pod "pod-subpath-test-configmap-v7f6": Phase="Running", Reason="", readiness=true. Elapsed: 10.040780436s May 4 23:50:48.795: INFO: Pod "pod-subpath-test-configmap-v7f6": Phase="Running", Reason="", readiness=true. Elapsed: 12.045456155s May 4 23:50:50.799: INFO: Pod "pod-subpath-test-configmap-v7f6": Phase="Running", Reason="", readiness=true. Elapsed: 14.049704708s May 4 23:50:52.804: INFO: Pod "pod-subpath-test-configmap-v7f6": Phase="Running", Reason="", readiness=true. Elapsed: 16.054327768s May 4 23:50:54.808: INFO: Pod "pod-subpath-test-configmap-v7f6": Phase="Running", Reason="", readiness=true. Elapsed: 18.058371554s May 4 23:50:56.812: INFO: Pod "pod-subpath-test-configmap-v7f6": Phase="Running", Reason="", readiness=true. Elapsed: 20.062792578s May 4 23:50:58.817: INFO: Pod "pod-subpath-test-configmap-v7f6": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.067751047s May 4 23:51:00.822: INFO: Pod "pod-subpath-test-configmap-v7f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.072056361s STEP: Saw pod success May 4 23:51:00.822: INFO: Pod "pod-subpath-test-configmap-v7f6" satisfied condition "Succeeded or Failed" May 4 23:51:00.825: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-v7f6 container test-container-subpath-configmap-v7f6: STEP: delete the pod May 4 23:51:00.852: INFO: Waiting for pod pod-subpath-test-configmap-v7f6 to disappear May 4 23:51:00.856: INFO: Pod pod-subpath-test-configmap-v7f6 no longer exists STEP: Deleting pod pod-subpath-test-configmap-v7f6 May 4 23:51:00.856: INFO: Deleting pod "pod-subpath-test-configmap-v7f6" in namespace "subpath-4614" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:51:00.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4614" for this suite. 
• [SLOW TEST:24.535 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":288,"completed":29,"skipped":480,"failed":0} SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:51:00.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-463 STEP: creating a selector STEP: Creating the service pods in kubernetes May 4 23:51:00.930: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 4 23:51:01.004: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 4 23:51:03.007: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 
4 23:51:05.008: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 23:51:07.007: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 23:51:09.008: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 23:51:11.008: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 23:51:13.007: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 23:51:15.008: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 23:51:17.008: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 23:51:19.008: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 23:51:21.008: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 23:51:23.008: INFO: The status of Pod netserver-0 is Running (Ready = true) May 4 23:51:23.015: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 4 23:51:27.041: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.25:8080/dial?request=hostname&protocol=udp&host=10.244.1.24&port=8081&tries=1'] Namespace:pod-network-test-463 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 4 23:51:27.041: INFO: >>> kubeConfig: /root/.kube/config I0504 23:51:27.077340 7 log.go:172] (0xc002b5c420) (0xc002e0c3c0) Create stream I0504 23:51:27.077371 7 log.go:172] (0xc002b5c420) (0xc002e0c3c0) Stream added, broadcasting: 1 I0504 23:51:27.080250 7 log.go:172] (0xc002b5c420) Reply frame received for 1 I0504 23:51:27.080312 7 log.go:172] (0xc002b5c420) (0xc002cc9d60) Create stream I0504 23:51:27.080331 7 log.go:172] (0xc002b5c420) (0xc002cc9d60) Stream added, broadcasting: 3 I0504 23:51:27.081550 7 log.go:172] (0xc002b5c420) Reply frame received for 3 I0504 23:51:27.081593 7 log.go:172] (0xc002b5c420) (0xc00037c8c0) Create stream I0504 23:51:27.081603 7 log.go:172] (0xc002b5c420) (0xc00037c8c0) Stream added, 
broadcasting: 5 I0504 23:51:27.082648 7 log.go:172] (0xc002b5c420) Reply frame received for 5 I0504 23:51:27.188586 7 log.go:172] (0xc002b5c420) Data frame received for 3 I0504 23:51:27.188617 7 log.go:172] (0xc002cc9d60) (3) Data frame handling I0504 23:51:27.188629 7 log.go:172] (0xc002cc9d60) (3) Data frame sent I0504 23:51:27.190022 7 log.go:172] (0xc002b5c420) Data frame received for 5 I0504 23:51:27.190071 7 log.go:172] (0xc00037c8c0) (5) Data frame handling I0504 23:51:27.190103 7 log.go:172] (0xc002b5c420) Data frame received for 3 I0504 23:51:27.190112 7 log.go:172] (0xc002cc9d60) (3) Data frame handling I0504 23:51:27.192611 7 log.go:172] (0xc002b5c420) Data frame received for 1 I0504 23:51:27.192653 7 log.go:172] (0xc002e0c3c0) (1) Data frame handling I0504 23:51:27.192686 7 log.go:172] (0xc002e0c3c0) (1) Data frame sent I0504 23:51:27.192816 7 log.go:172] (0xc002b5c420) (0xc002e0c3c0) Stream removed, broadcasting: 1 I0504 23:51:27.192858 7 log.go:172] (0xc002b5c420) Go away received I0504 23:51:27.193061 7 log.go:172] (0xc002b5c420) (0xc002e0c3c0) Stream removed, broadcasting: 1 I0504 23:51:27.193074 7 log.go:172] (0xc002b5c420) (0xc002cc9d60) Stream removed, broadcasting: 3 I0504 23:51:27.193084 7 log.go:172] (0xc002b5c420) (0xc00037c8c0) Stream removed, broadcasting: 5 May 4 23:51:27.193: INFO: Waiting for responses: map[] May 4 23:51:27.196: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.25:8080/dial?request=hostname&protocol=udp&host=10.244.2.188&port=8081&tries=1'] Namespace:pod-network-test-463 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 4 23:51:27.196: INFO: >>> kubeConfig: /root/.kube/config I0504 23:51:27.222670 7 log.go:172] (0xc002a9c580) (0xc000397cc0) Create stream I0504 23:51:27.222700 7 log.go:172] (0xc002a9c580) (0xc000397cc0) Stream added, broadcasting: 1 I0504 23:51:27.224901 7 log.go:172] (0xc002a9c580) Reply frame received for 
1 I0504 23:51:27.224943 7 log.go:172] (0xc002a9c580) (0xc000d8d5e0) Create stream I0504 23:51:27.224954 7 log.go:172] (0xc002a9c580) (0xc000d8d5e0) Stream added, broadcasting: 3 I0504 23:51:27.225783 7 log.go:172] (0xc002a9c580) Reply frame received for 3 I0504 23:51:27.225819 7 log.go:172] (0xc002a9c580) (0xc00106d040) Create stream I0504 23:51:27.225831 7 log.go:172] (0xc002a9c580) (0xc00106d040) Stream added, broadcasting: 5 I0504 23:51:27.226516 7 log.go:172] (0xc002a9c580) Reply frame received for 5 I0504 23:51:27.283678 7 log.go:172] (0xc002a9c580) Data frame received for 3 I0504 23:51:27.283717 7 log.go:172] (0xc000d8d5e0) (3) Data frame handling I0504 23:51:27.283737 7 log.go:172] (0xc000d8d5e0) (3) Data frame sent I0504 23:51:27.283957 7 log.go:172] (0xc002a9c580) Data frame received for 3 I0504 23:51:27.284003 7 log.go:172] (0xc000d8d5e0) (3) Data frame handling I0504 23:51:27.284240 7 log.go:172] (0xc002a9c580) Data frame received for 5 I0504 23:51:27.284258 7 log.go:172] (0xc00106d040) (5) Data frame handling I0504 23:51:27.286126 7 log.go:172] (0xc002a9c580) Data frame received for 1 I0504 23:51:27.286151 7 log.go:172] (0xc000397cc0) (1) Data frame handling I0504 23:51:27.286160 7 log.go:172] (0xc000397cc0) (1) Data frame sent I0504 23:51:27.286171 7 log.go:172] (0xc002a9c580) (0xc000397cc0) Stream removed, broadcasting: 1 I0504 23:51:27.286195 7 log.go:172] (0xc002a9c580) Go away received I0504 23:51:27.286286 7 log.go:172] (0xc002a9c580) (0xc000397cc0) Stream removed, broadcasting: 1 I0504 23:51:27.286312 7 log.go:172] (0xc002a9c580) (0xc000d8d5e0) Stream removed, broadcasting: 3 I0504 23:51:27.286323 7 log.go:172] (0xc002a9c580) (0xc00106d040) Stream removed, broadcasting: 5 May 4 23:51:27.286: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:51:27.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "pod-network-test-463" for this suite. • [SLOW TEST:26.430 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":288,"completed":30,"skipped":482,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:51:27.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-67fca6dd-1730-4f01-ae8c-e339150b4f2e STEP: Creating a pod to test consume secrets May 4 23:51:27.396: INFO: Waiting up to 5m0s for pod "pod-secrets-2244d7ea-1caf-40a3-8ff7-40ea5ca316a8" in namespace "secrets-5530" to be "Succeeded or Failed" May 4 23:51:27.418: INFO: Pod "pod-secrets-2244d7ea-1caf-40a3-8ff7-40ea5ca316a8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 21.964933ms May 4 23:51:29.423: INFO: Pod "pod-secrets-2244d7ea-1caf-40a3-8ff7-40ea5ca316a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02661766s May 4 23:51:31.427: INFO: Pod "pod-secrets-2244d7ea-1caf-40a3-8ff7-40ea5ca316a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030754282s STEP: Saw pod success May 4 23:51:31.427: INFO: Pod "pod-secrets-2244d7ea-1caf-40a3-8ff7-40ea5ca316a8" satisfied condition "Succeeded or Failed" May 4 23:51:31.429: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-2244d7ea-1caf-40a3-8ff7-40ea5ca316a8 container secret-volume-test: STEP: delete the pod May 4 23:51:31.599: INFO: Waiting for pod pod-secrets-2244d7ea-1caf-40a3-8ff7-40ea5ca316a8 to disappear May 4 23:51:31.623: INFO: Pod pod-secrets-2244d7ea-1caf-40a3-8ff7-40ea5ca316a8 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:51:31.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5530" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":31,"skipped":494,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:51:31.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0504 23:51:33.270132 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 4 23:51:33.270: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:51:33.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6725" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":288,"completed":32,"skipped":507,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:51:33.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 23:51:33.694: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 4 23:51:36.770: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9604 create -f -' May 4 23:51:41.187: INFO: stderr: "" May 4 23:51:41.187: INFO: stdout: "e2e-test-crd-publish-openapi-5345-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 4 23:51:41.187: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9604 delete e2e-test-crd-publish-openapi-5345-crds test-cr' May 4 23:51:41.303: INFO: stderr: "" May 4 23:51:41.303: INFO: stdout: 
"e2e-test-crd-publish-openapi-5345-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 4 23:51:41.303: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9604 apply -f -' May 4 23:51:41.562: INFO: stderr: "" May 4 23:51:41.562: INFO: stdout: "e2e-test-crd-publish-openapi-5345-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 4 23:51:41.562: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9604 delete e2e-test-crd-publish-openapi-5345-crds test-cr' May 4 23:51:41.695: INFO: stderr: "" May 4 23:51:41.695: INFO: stdout: "e2e-test-crd-publish-openapi-5345-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 4 23:51:41.696: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5345-crds' May 4 23:51:41.950: INFO: stderr: "" May 4 23:51:41.950: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5345-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:51:44.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9604" for this suite. • [SLOW TEST:11.489 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":288,"completed":33,"skipped":512,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:51:44.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable 
from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-b2e4bfe2-2c78-4625-8514-540de5bf4d3f STEP: Creating a pod to test consume secrets May 4 23:51:44.978: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ffbcdc08-da2f-4aa9-a8b8-6336d6ca5978" in namespace "projected-6319" to be "Succeeded or Failed" May 4 23:51:45.008: INFO: Pod "pod-projected-secrets-ffbcdc08-da2f-4aa9-a8b8-6336d6ca5978": Phase="Pending", Reason="", readiness=false. Elapsed: 29.754495ms May 4 23:51:47.012: INFO: Pod "pod-projected-secrets-ffbcdc08-da2f-4aa9-a8b8-6336d6ca5978": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033605263s May 4 23:51:49.016: INFO: Pod "pod-projected-secrets-ffbcdc08-da2f-4aa9-a8b8-6336d6ca5978": Phase="Running", Reason="", readiness=true. Elapsed: 4.037064022s May 4 23:51:51.019: INFO: Pod "pod-projected-secrets-ffbcdc08-da2f-4aa9-a8b8-6336d6ca5978": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.041025155s STEP: Saw pod success May 4 23:51:51.020: INFO: Pod "pod-projected-secrets-ffbcdc08-da2f-4aa9-a8b8-6336d6ca5978" satisfied condition "Succeeded or Failed" May 4 23:51:51.023: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-ffbcdc08-da2f-4aa9-a8b8-6336d6ca5978 container projected-secret-volume-test: STEP: delete the pod May 4 23:51:51.059: INFO: Waiting for pod pod-projected-secrets-ffbcdc08-da2f-4aa9-a8b8-6336d6ca5978 to disappear May 4 23:51:51.079: INFO: Pod pod-projected-secrets-ffbcdc08-da2f-4aa9-a8b8-6336d6ca5978 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:51:51.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6319" for this suite. • [SLOW TEST:6.185 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":34,"skipped":525,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:51:51.087: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-6cw6 STEP: Creating a pod to test atomic-volume-subpath May 4 23:51:51.183: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-6cw6" in namespace "subpath-2677" to be "Succeeded or Failed" May 4 23:51:51.216: INFO: Pod "pod-subpath-test-secret-6cw6": Phase="Pending", Reason="", readiness=false. Elapsed: 33.510328ms May 4 23:51:53.281: INFO: Pod "pod-subpath-test-secret-6cw6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098527102s May 4 23:51:55.285: INFO: Pod "pod-subpath-test-secret-6cw6": Phase="Running", Reason="", readiness=true. Elapsed: 4.102645297s May 4 23:51:57.290: INFO: Pod "pod-subpath-test-secret-6cw6": Phase="Running", Reason="", readiness=true. Elapsed: 6.107113745s May 4 23:51:59.295: INFO: Pod "pod-subpath-test-secret-6cw6": Phase="Running", Reason="", readiness=true. Elapsed: 8.111800844s May 4 23:52:01.323: INFO: Pod "pod-subpath-test-secret-6cw6": Phase="Running", Reason="", readiness=true. Elapsed: 10.139906259s May 4 23:52:03.327: INFO: Pod "pod-subpath-test-secret-6cw6": Phase="Running", Reason="", readiness=true. Elapsed: 12.143889823s May 4 23:52:05.330: INFO: Pod "pod-subpath-test-secret-6cw6": Phase="Running", Reason="", readiness=true. Elapsed: 14.147737056s May 4 23:52:07.334: INFO: Pod "pod-subpath-test-secret-6cw6": Phase="Running", Reason="", readiness=true. Elapsed: 16.151390793s May 4 23:52:09.347: INFO: Pod "pod-subpath-test-secret-6cw6": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.164014121s May 4 23:52:11.365: INFO: Pod "pod-subpath-test-secret-6cw6": Phase="Running", Reason="", readiness=true. Elapsed: 20.181785404s May 4 23:52:13.372: INFO: Pod "pod-subpath-test-secret-6cw6": Phase="Running", Reason="", readiness=true. Elapsed: 22.18947581s May 4 23:52:15.387: INFO: Pod "pod-subpath-test-secret-6cw6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.204132987s STEP: Saw pod success May 4 23:52:15.387: INFO: Pod "pod-subpath-test-secret-6cw6" satisfied condition "Succeeded or Failed" May 4 23:52:15.431: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-6cw6 container test-container-subpath-secret-6cw6: STEP: delete the pod May 4 23:52:15.474: INFO: Waiting for pod pod-subpath-test-secret-6cw6 to disappear May 4 23:52:15.480: INFO: Pod pod-subpath-test-secret-6cw6 no longer exists STEP: Deleting pod pod-subpath-test-secret-6cw6 May 4 23:52:15.480: INFO: Deleting pod "pod-subpath-test-secret-6cw6" in namespace "subpath-2677" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:52:15.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2677" for this suite. 
• [SLOW TEST:24.403 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":288,"completed":35,"skipped":563,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:52:15.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 4 23:52:16.259: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 4 23:52:18.269: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724233136, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724233136, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724233136, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724233136, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 4 23:52:21.306: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:52:21.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9696" for this suite. STEP: Destroying namespace "webhook-9696-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.136 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":288,"completed":36,"skipped":564,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:52:21.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 4 23:52:21.680: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:52:37.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6038" for this suite. • [SLOW TEST:15.630 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":288,"completed":37,"skipped":611,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:52:37.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 4 23:52:37.359: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7765 /api/v1/namespaces/watch-7765/configmaps/e2e-watch-test-watch-closed 2e4b17a2-2088-43b6-9ee7-f82ffa6dd6e0 1515610 0 2020-05-04 23:52:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-04 23:52:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 4 23:52:37.359: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7765 /api/v1/namespaces/watch-7765/configmaps/e2e-watch-test-watch-closed 2e4b17a2-2088-43b6-9ee7-f82ffa6dd6e0 1515611 0 2020-05-04 23:52:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-04 23:52:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 4 23:52:37.382: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7765 /api/v1/namespaces/watch-7765/configmaps/e2e-watch-test-watch-closed 2e4b17a2-2088-43b6-9ee7-f82ffa6dd6e0 1515612 0 2020-05-04 23:52:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-04 23:52:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 4 23:52:37.382: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7765 
/api/v1/namespaces/watch-7765/configmaps/e2e-watch-test-watch-closed 2e4b17a2-2088-43b6-9ee7-f82ffa6dd6e0 1515613 0 2020-05-04 23:52:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-04 23:52:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:52:37.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7765" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":288,"completed":38,"skipped":624,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:52:37.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be 
ready May 4 23:52:38.123: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 4 23:52:40.134: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724233158, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724233158, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724233158, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724233158, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 4 23:52:43.169: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the 
mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:52:43.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2963" for this suite. STEP: Destroying namespace "webhook-2963-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.093 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":288,"completed":39,"skipped":640,"failed":0} [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:52:43.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting 
Job.batch foo in namespace job-1283, will wait for the garbage collector to delete the pods May 4 23:52:47.675: INFO: Deleting Job.batch foo took: 6.764425ms May 4 23:52:47.775: INFO: Terminating Job.batch foo pods took: 100.226754ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:53:24.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1283" for this suite. • [SLOW TEST:41.521 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":288,"completed":40,"skipped":640,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:53:25.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 23:53:25.073: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config version' May 4 23:53:25.234: INFO: stderr: "" May 4 
23:53:25.235: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.2.298+0bcbe384d866b9\", GitCommit:\"0bcbe384d866b9cf4b51d0a2905befc538e99db7\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T18:23:02Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:53:25.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3931" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":288,"completed":41,"skipped":642,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:53:25.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 4 23:53:25.284: INFO: >>> kubeConfig: 
/root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:53:41.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6941" for this suite. • [SLOW TEST:16.097 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":288,"completed":42,"skipped":655,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:53:41.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 4 23:53:45.999: INFO: Successfully updated pod "annotationupdateadd0ea34-3e16-4609-9afc-04c5063fba85" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:53:50.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8009" for this suite. • [SLOW TEST:8.768 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":43,"skipped":666,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:53:50.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 23:53:50.182: INFO: 
Creating replica set "test-rolling-update-controller" (going to be adopted) May 4 23:53:50.294: INFO: Pod name sample-pod: Found 0 pods out of 1 May 4 23:53:55.298: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 4 23:53:55.298: INFO: Creating deployment "test-rolling-update-deployment" May 4 23:53:55.302: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 4 23:53:55.367: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 4 23:53:57.373: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 4 23:53:57.375: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724233235, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724233235, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724233235, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724233235, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 23:53:59.379: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 4 23:53:59.387: INFO: Deployment 
"test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-6152 /apis/apps/v1/namespaces/deployment-6152/deployments/test-rolling-update-deployment 9cf7047f-659f-4401-a9bf-1449ff5e1a7d 1516083 1 2020-05-04 23:53:55 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-05-04 23:53:55 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-04 23:53:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost 
us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0048c31f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-04 23:53:55 +0000 UTC,LastTransitionTime:2020-05-04 23:53:55 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-df7bb669b" has successfully progressed.,LastUpdateTime:2020-05-04 23:53:58 +0000 UTC,LastTransitionTime:2020-05-04 23:53:55 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 4 23:53:59.390: INFO: New ReplicaSet "test-rolling-update-deployment-df7bb669b" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-df7bb669b deployment-6152 /apis/apps/v1/namespaces/deployment-6152/replicasets/test-rolling-update-deployment-df7bb669b 3baf8869-9780-41de-9499-3c1de691b4d2 1516070 1 2020-05-04 23:53:55 +0000 UTC map[name:sample-pod 
pod-template-hash:df7bb669b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 9cf7047f-659f-4401-a9bf-1449ff5e1a7d 0xc0048c37f0 0xc0048c37f1}] [] [{kube-controller-manager Update apps/v1 2020-05-04 23:53:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9cf7047f-659f-4401-a9bf-1449ff5e1a7d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: df7bb669b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0048c3868 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 4 23:53:59.390: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 4 23:53:59.390: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-6152 /apis/apps/v1/namespaces/deployment-6152/replicasets/test-rolling-update-controller a02d7530-2a4b-4743-9e2d-533a8451d3c0 1516082 2 2020-05-04 23:53:50 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 9cf7047f-659f-4401-a9bf-1449ff5e1a7d 0xc0048c36cf 0xc0048c36f0}] [] [{e2e.test Update apps/v1 2020-05-04 23:53:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} 
{kube-controller-manager Update apps/v1 2020-05-04 23:53:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9cf7047f-659f-4401-a9bf-1449ff5e1a7d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0048c3788 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 4 23:53:59.394: INFO: Pod "test-rolling-update-deployment-df7bb669b-2tw6n" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-df7bb669b-2tw6n test-rolling-update-deployment-df7bb669b- deployment-6152 /api/v1/namespaces/deployment-6152/pods/test-rolling-update-deployment-df7bb669b-2tw6n 7d9c447b-59a6-4b69-aeb0-8a94e0641c49 1516069 0 2020-05-04 23:53:55 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-df7bb669b 3baf8869-9780-41de-9499-3c1de691b4d2 0xc0048c3da0 0xc0048c3da1}] [] [{kube-controller-manager Update v1 2020-05-04 23:53:55 +0000 
UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3baf8869-9780-41de-9499-3c1de691b4d2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-04 23:53:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.193\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ltt7b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ltt7b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]
EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ltt7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Con
ditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 23:53:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 23:53:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 23:53:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 23:53:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.193,StartTime:2020-05-04 23:53:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-04 23:53:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://237a9fe633c491bdc0b266eb7a595cb7fc051bd1177069d7f83921f34fbec0a6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.193,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:53:59.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6152" for this suite. 
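Editor's note on the RollingUpdateDeployment test above: the dumped Deployment uses the default RollingUpdate strategy (maxSurge and maxUnavailable of 25%) and an agnhost pod template. A minimal sketch of the manifest shape the log corresponds to, built with the Python standard library only; the builder function is hypothetical (the actual test constructs the object in Go via client-go), while the names and image are taken from the log itself:

```python
import json

def rolling_update_deployment(name, image, replicas=1):
    """Hypothetical helper: the Deployment shape dumped in the log above."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"name": "sample-pod"}},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"name": "sample-pod"}},
            # Default rolling-update parameters, shown explicitly.
            "strategy": {
                "type": "RollingUpdate",
                "rollingUpdate": {"maxSurge": "25%", "maxUnavailable": "25%"},
            },
            "template": {
                "metadata": {"labels": {"name": "sample-pod"}},
                "spec": {"containers": [{"name": "agnhost", "image": image}]},
            },
        },
    }

manifest = rolling_update_deployment(
    "test-rolling-update-deployment",
    "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
)
print(json.dumps(manifest, indent=2))
```

With these defaults, at most one extra pod (25% of 1, rounded up) is created during the rollout and at most one pod may be unavailable, which is why the log shows Replicas:2, UpdatedReplicas:1 mid-rollout.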
• [SLOW TEST:9.292 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":44,"skipped":674,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:53:59.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-86b7fb49-1838-4cd4-a985-63184b5b493a in namespace container-probe-2881 May 4 23:54:03.695: INFO: Started pod liveness-86b7fb49-1838-4cd4-a985-63184b5b493a in namespace container-probe-2881 STEP: checking the pod's current state and verifying that restartCount is present May 4 23:54:03.698: INFO: Initial restart count of pod liveness-86b7fb49-1838-4cd4-a985-63184b5b493a is 0 May 4 23:54:19.736: INFO: Restart count of pod container-probe-2881/liveness-86b7fb49-1838-4cd4-a985-63184b5b493a is now 1 
(16.037927449s elapsed) May 4 23:54:39.795: INFO: Restart count of pod container-probe-2881/liveness-86b7fb49-1838-4cd4-a985-63184b5b493a is now 2 (36.096224698s elapsed) May 4 23:54:59.951: INFO: Restart count of pod container-probe-2881/liveness-86b7fb49-1838-4cd4-a985-63184b5b493a is now 3 (56.252342538s elapsed) May 4 23:55:19.993: INFO: Restart count of pod container-probe-2881/liveness-86b7fb49-1838-4cd4-a985-63184b5b493a is now 4 (1m16.294619765s elapsed) May 4 23:56:32.336: INFO: Restart count of pod container-probe-2881/liveness-86b7fb49-1838-4cd4-a985-63184b5b493a is now 5 (2m28.637401153s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:56:32.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2881" for this suite. • [SLOW TEST:153.000 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":288,"completed":45,"skipped":679,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: 
Creating a kubernetes client May 4 23:56:32.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 4 23:56:36.794: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:56:36.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7121" for this suite. 
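Editor's note on the termination-message test above: with TerminationMessagePolicy set to FallbackToLogsOnError, the kubelet falls back to the tail of the container log only when the message file is empty *and* the container failed; since the pod here succeeded, the message stays empty (`&{}`). A minimal sketch of that rule, assuming a hypothetical function and byte cap (this is an illustration of the semantics, not kubelet code):

```python
MAX_FALLBACK_BYTES = 2048  # assumed cap for this sketch; the kubelet applies its own limit

def termination_message(file_contents: str, logs: str, exit_code: int,
                        policy: str = "File") -> str:
    """Hypothetical model of how the termination message is resolved."""
    if file_contents:
        return file_contents  # an explicit message always wins
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        return logs[-MAX_FALLBACK_BYTES:]  # fall back to the log tail on failure
    return ""  # the case this test checks: pod succeeded, message stays empty

# The passing case from the log: empty message file, exit code 0.
print(repr(termination_message("", "some log output", exit_code=0,
                               policy="FallbackToLogsOnError")))
```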
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":46,"skipped":706,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:56:36.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0504 23:56:38.080423 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 4 23:56:38.080: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 23:56:38.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7457" for this suite.
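The behavior exercised above (Deployment, its ReplicaSet, and the Pods all removed once the owner is deleted without orphaning) can be modeled as a toy ownerReference cascade. This is an illustration of the cascading rule only, not the real garbage collector, which tracks ownerReferences in a dependency graph and processes deletions through work queues:

```go
package main

import "fmt"

// object is a minimal stand-in for an API object with ownerReferences.
type object struct {
	name   string
	owners []string // names of owning objects
}

// cascadeDelete removes root and then, transitively, every object whose
// owner has been deleted -- the "not orphaning" path the test exercises.
func cascadeDelete(objs map[string]*object, root string) {
	delete(objs, root)
	for again := true; again; {
		again = false
		for name, o := range objs {
			for _, owner := range o.owners {
				if _, ok := objs[owner]; !ok {
					delete(objs, name) // owner gone: dependent is collected
					again = true
					break
				}
			}
		}
	}
}

func main() {
	objs := map[string]*object{
		"deploy": {name: "deploy"},
		"rs":     {name: "rs", owners: []string{"deploy"}},
		"pod-a":  {name: "pod-a", owners: []string{"rs"}},
		"pod-b":  {name: "pod-b", owners: []string{"rs"}},
	}
	cascadeDelete(objs, "deploy")
	fmt.Println(len(objs)) // everything is collected, matching "expected 0 rs ... 0 pods"
}
```

The intermediate "expected 0 rs, got 1 rs" lines in the log are simply the test polling before the collector finishes the cascade.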
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":288,"completed":47,"skipped":730,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 23:56:38.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7941.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-7941.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7941.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7941.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-7941.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7941.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 4 23:56:46.419: INFO: DNS probes using dns-7941/dns-test-5de349e3-0d24-4887-88c4-82f672e9cf11 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 23:56:46.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7941" for this suite.
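The awk pipeline in the probe commands above turns the pod's IP into its DNS A-record name: dots replaced by dashes, qualified under `<namespace>.pod.cluster.local`. A Go equivalent of that transformation, assuming an IPv4 pod address (the example IP is invented; the real test derives it from the probe pod at runtime):

```go
package main

import (
	"fmt"
	"strings"
)

// podARecord mirrors the probe's
//   hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".<ns>.pod.cluster.local"}'
// step: the pod IP with dots replaced by dashes, suffixed with the
// namespace's pod subdomain.
func podARecord(podIP, namespace string) string {
	return strings.ReplaceAll(podIP, ".", "-") + "." + namespace + ".pod.cluster.local"
}

func main() {
	fmt.Println(podARecord("10.244.1.5", "dns-7941"))
	// → 10-244-1-5.dns-7941.pod.cluster.local
}
```

The `$$` in the logged commands is Makefile-style escaping by the test's template; the shell actually executes single `$`.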
• [SLOW TEST:8.524 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":288,"completed":48,"skipped":761,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 23:56:46.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 4 23:56:47.266: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
May 4 23:56:47.299: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:56:47.317: INFO: Number of nodes with available pods: 0
May 4 23:56:47.317: INFO: Node latest-worker is running more than one daemon pod
May 4 23:56:48.323: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:56:48.326: INFO: Number of nodes with available pods: 0
May 4 23:56:48.326: INFO: Node latest-worker is running more than one daemon pod
May 4 23:56:49.463: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:56:49.655: INFO: Number of nodes with available pods: 0
May 4 23:56:49.655: INFO: Node latest-worker is running more than one daemon pod
May 4 23:56:50.322: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:56:50.326: INFO: Number of nodes with available pods: 0
May 4 23:56:50.326: INFO: Node latest-worker is running more than one daemon pod
May 4 23:56:51.323: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:56:51.326: INFO: Number of nodes with available pods: 0
May 4 23:56:51.326: INFO: Node latest-worker is running more than one daemon pod
May 4 23:56:52.323: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:56:52.327: INFO: Number of nodes with available pods: 2
May 4 23:56:52.327: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
May 4 23:56:52.379: INFO: Wrong image for pod: daemon-set-vr488. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:56:52.379: INFO: Wrong image for pod: daemon-set-zc22g. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:56:52.407: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:56:53.412: INFO: Wrong image for pod: daemon-set-vr488. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:56:53.412: INFO: Wrong image for pod: daemon-set-zc22g. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:56:53.416: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:56:54.412: INFO: Wrong image for pod: daemon-set-vr488. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:56:54.412: INFO: Wrong image for pod: daemon-set-zc22g. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:56:54.416: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:56:55.412: INFO: Wrong image for pod: daemon-set-vr488. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:56:55.412: INFO: Wrong image for pod: daemon-set-zc22g. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:56:55.417: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:56:56.412: INFO: Wrong image for pod: daemon-set-vr488. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:56:56.412: INFO: Pod daemon-set-vr488 is not available
May 4 23:56:56.412: INFO: Wrong image for pod: daemon-set-zc22g. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:56:56.417: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:56:57.411: INFO: Wrong image for pod: daemon-set-vr488. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:56:57.412: INFO: Pod daemon-set-vr488 is not available
May 4 23:56:57.412: INFO: Wrong image for pod: daemon-set-zc22g. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:56:57.415: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:56:58.411: INFO: Wrong image for pod: daemon-set-vr488. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:56:58.411: INFO: Pod daemon-set-vr488 is not available
May 4 23:56:58.411: INFO: Wrong image for pod: daemon-set-zc22g. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:56:58.415: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:56:59.413: INFO: Wrong image for pod: daemon-set-vr488. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:56:59.413: INFO: Pod daemon-set-vr488 is not available
May 4 23:56:59.413: INFO: Wrong image for pod: daemon-set-zc22g. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:56:59.417: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:57:00.412: INFO: Wrong image for pod: daemon-set-vr488. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:57:00.413: INFO: Pod daemon-set-vr488 is not available
May 4 23:57:00.413: INFO: Wrong image for pod: daemon-set-zc22g. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:57:00.416: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:57:01.412: INFO: Wrong image for pod: daemon-set-vr488. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:57:01.413: INFO: Pod daemon-set-vr488 is not available
May 4 23:57:01.413: INFO: Wrong image for pod: daemon-set-zc22g. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:57:01.417: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:57:02.412: INFO: Wrong image for pod: daemon-set-vr488. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:57:02.412: INFO: Pod daemon-set-vr488 is not available
May 4 23:57:02.412: INFO: Wrong image for pod: daemon-set-zc22g. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:57:02.416: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:57:03.413: INFO: Wrong image for pod: daemon-set-vr488. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:57:03.413: INFO: Pod daemon-set-vr488 is not available
May 4 23:57:03.413: INFO: Wrong image for pod: daemon-set-zc22g. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:57:03.417: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:57:04.413: INFO: Wrong image for pod: daemon-set-vr488. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:57:04.413: INFO: Pod daemon-set-vr488 is not available
May 4 23:57:04.413: INFO: Wrong image for pod: daemon-set-zc22g. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:57:04.418: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:57:05.413: INFO: Pod daemon-set-xrs2m is not available
May 4 23:57:05.413: INFO: Wrong image for pod: daemon-set-zc22g. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:57:05.417: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:57:06.553: INFO: Pod daemon-set-xrs2m is not available
May 4 23:57:06.553: INFO: Wrong image for pod: daemon-set-zc22g. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:57:06.557: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:57:07.445: INFO: Pod daemon-set-xrs2m is not available
May 4 23:57:07.445: INFO: Wrong image for pod: daemon-set-zc22g. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:57:07.514: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:57:08.412: INFO: Pod daemon-set-xrs2m is not available
May 4 23:57:08.412: INFO: Wrong image for pod: daemon-set-zc22g. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:57:08.417: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:57:09.412: INFO: Wrong image for pod: daemon-set-zc22g. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:57:09.418: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:57:10.413: INFO: Wrong image for pod: daemon-set-zc22g. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:57:10.413: INFO: Pod daemon-set-zc22g is not available
May 4 23:57:10.416: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:57:11.413: INFO: Wrong image for pod: daemon-set-zc22g. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:57:11.413: INFO: Pod daemon-set-zc22g is not available
May 4 23:57:11.416: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:57:12.413: INFO: Wrong image for pod: daemon-set-zc22g. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:57:12.413: INFO: Pod daemon-set-zc22g is not available
May 4 23:57:12.417: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:57:13.412: INFO: Wrong image for pod: daemon-set-zc22g. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:57:13.412: INFO: Pod daemon-set-zc22g is not available
May 4 23:57:13.417: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:57:14.413: INFO: Wrong image for pod: daemon-set-zc22g. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 4 23:57:14.413: INFO: Pod daemon-set-zc22g is not available
May 4 23:57:14.420: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:57:15.425: INFO: Pod daemon-set-zksbh is not available
May 4 23:57:15.428: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
May 4 23:57:15.470: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:57:15.473: INFO: Number of nodes with available pods: 1
May 4 23:57:15.473: INFO: Node latest-worker2 is running more than one daemon pod
May 4 23:57:16.536: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:57:16.539: INFO: Number of nodes with available pods: 1
May 4 23:57:16.539: INFO: Node latest-worker2 is running more than one daemon pod
May 4 23:57:17.477: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:57:17.479: INFO: Number of nodes with available pods: 1
May 4 23:57:17.479: INFO: Node latest-worker2 is running more than one daemon pod
May 4 23:57:18.479: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:57:18.483: INFO: Number of nodes with available pods: 1
May 4 23:57:18.483: INFO: Node latest-worker2 is running more than one daemon pod
May 4 23:57:19.479: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 23:57:19.483: INFO: Number of nodes with available pods: 2
May 4 23:57:19.483: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1812, will wait for the garbage collector to delete the pods
May 4 23:57:19.557: INFO: Deleting DaemonSet.extensions daemon-set took: 7.425074ms
May 4 23:57:19.857: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.240625ms
May 4 23:57:34.960: INFO: Number of nodes with available pods: 0
May 4 23:57:34.960: INFO: Number of running nodes: 0, number of available pods: 0
May 4 23:57:34.963: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1812/daemonsets","resourceVersion":"1516982"},"items":null}
May 4 23:57:34.967: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1812/pods","resourceVersion":"1516982"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 23:57:34.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1812" for this suite.
• [SLOW TEST:48.373 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":288,"completed":49,"skipped":768,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:57:34.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-5755/configmap-test-344e3bd5-62f7-4c61-a088-f00f1bba79db STEP: Creating a pod to test consume configMaps May 4 23:57:35.086: INFO: Waiting up to 5m0s for pod "pod-configmaps-d4f6353d-a98c-4a0d-8059-922b150803ca" in namespace "configmap-5755" to be "Succeeded or Failed" May 4 23:57:35.103: INFO: Pod "pod-configmaps-d4f6353d-a98c-4a0d-8059-922b150803ca": Phase="Pending", Reason="", readiness=false. Elapsed: 17.556884ms May 4 23:57:37.107: INFO: Pod "pod-configmaps-d4f6353d-a98c-4a0d-8059-922b150803ca": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021016025s May 4 23:57:39.111: INFO: Pod "pod-configmaps-d4f6353d-a98c-4a0d-8059-922b150803ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025485921s STEP: Saw pod success May 4 23:57:39.111: INFO: Pod "pod-configmaps-d4f6353d-a98c-4a0d-8059-922b150803ca" satisfied condition "Succeeded or Failed" May 4 23:57:39.114: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-d4f6353d-a98c-4a0d-8059-922b150803ca container env-test: STEP: delete the pod May 4 23:57:39.149: INFO: Waiting for pod pod-configmaps-d4f6353d-a98c-4a0d-8059-922b150803ca to disappear May 4 23:57:39.153: INFO: Pod pod-configmaps-d4f6353d-a98c-4a0d-8059-922b150803ca no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:57:39.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5755" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":288,"completed":50,"skipped":786,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:57:39.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a 
watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 4 23:57:39.517: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8185 /api/v1/namespaces/watch-8185/configmaps/e2e-watch-test-configmap-a efa87052-abce-4b7c-98c1-cae9d23869d5 1517017 0 2020-05-04 23:57:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-04 23:57:39 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 4 23:57:39.517: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8185 /api/v1/namespaces/watch-8185/configmaps/e2e-watch-test-configmap-a efa87052-abce-4b7c-98c1-cae9d23869d5 1517017 0 2020-05-04 23:57:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-04 23:57:39 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 4 23:57:49.527: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8185 /api/v1/namespaces/watch-8185/configmaps/e2e-watch-test-configmap-a efa87052-abce-4b7c-98c1-cae9d23869d5 1517087 0 2020-05-04 23:57:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-04 23:57:49 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 4 23:57:49.527: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8185 
/api/v1/namespaces/watch-8185/configmaps/e2e-watch-test-configmap-a efa87052-abce-4b7c-98c1-cae9d23869d5 1517087 0 2020-05-04 23:57:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-04 23:57:49 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 4 23:57:59.537: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8185 /api/v1/namespaces/watch-8185/configmaps/e2e-watch-test-configmap-a efa87052-abce-4b7c-98c1-cae9d23869d5 1517113 0 2020-05-04 23:57:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-04 23:57:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 4 23:57:59.537: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8185 /api/v1/namespaces/watch-8185/configmaps/e2e-watch-test-configmap-a efa87052-abce-4b7c-98c1-cae9d23869d5 1517113 0 2020-05-04 23:57:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-04 23:57:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 4 23:58:09.545: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8185 /api/v1/namespaces/watch-8185/configmaps/e2e-watch-test-configmap-a efa87052-abce-4b7c-98c1-cae9d23869d5 1517143 0 2020-05-04 23:57:39 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-04 23:57:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 4 23:58:09.546: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8185 /api/v1/namespaces/watch-8185/configmaps/e2e-watch-test-configmap-a efa87052-abce-4b7c-98c1-cae9d23869d5 1517143 0 2020-05-04 23:57:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-04 23:57:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 4 23:58:19.554: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8185 /api/v1/namespaces/watch-8185/configmaps/e2e-watch-test-configmap-b de6755cc-7b01-4d15-9b5c-32a042920304 1517173 0 2020-05-04 23:58:19 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-04 23:58:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 4 23:58:19.554: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8185 /api/v1/namespaces/watch-8185/configmaps/e2e-watch-test-configmap-b de6755cc-7b01-4d15-9b5c-32a042920304 1517173 0 2020-05-04 23:58:19 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-04 23:58:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring 
the correct watchers observe the notification May 4 23:58:29.562: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8185 /api/v1/namespaces/watch-8185/configmaps/e2e-watch-test-configmap-b de6755cc-7b01-4d15-9b5c-32a042920304 1517204 0 2020-05-04 23:58:19 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-04 23:58:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 4 23:58:29.562: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8185 /api/v1/namespaces/watch-8185/configmaps/e2e-watch-test-configmap-b de6755cc-7b01-4d15-9b5c-32a042920304 1517204 0 2020-05-04 23:58:19 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-04 23:58:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:58:39.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8185" for this suite. 
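The ConfigMap the watchers above observe can be reconstructed from the object dumps in the log. A manifest equivalent to configmap A at its final logged mutation (name, namespace, label, and data value all taken from the log records above) would look roughly like:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  namespace: watch-8185
  labels:
    # the watchers select on this label
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "2"   # incremented by the test on each modification
```

The ADDED/MODIFIED/DELETED events above are exactly what a label-selector watch on `watch-this-configmap=multiple-watchers-A` delivers as the test creates, mutates, and deletes this object.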
• [SLOW TEST:60.410 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":288,"completed":51,"skipped":830,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:58:39.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-25ad5931-b3c9-46c8-84b3-57fde42c77ac STEP: Creating a pod to test consume secrets May 4 23:58:39.702: INFO: Waiting up to 5m0s for pod "pod-secrets-02a7f712-d1a9-4804-b3b5-cb7c062554ba" in namespace "secrets-9305" to be "Succeeded or Failed" May 4 23:58:39.745: INFO: Pod "pod-secrets-02a7f712-d1a9-4804-b3b5-cb7c062554ba": Phase="Pending", Reason="", readiness=false. Elapsed: 42.111844ms May 4 23:58:41.750: INFO: Pod "pod-secrets-02a7f712-d1a9-4804-b3b5-cb7c062554ba": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.047037894s May 4 23:58:43.754: INFO: Pod "pod-secrets-02a7f712-d1a9-4804-b3b5-cb7c062554ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051384677s STEP: Saw pod success May 4 23:58:43.754: INFO: Pod "pod-secrets-02a7f712-d1a9-4804-b3b5-cb7c062554ba" satisfied condition "Succeeded or Failed" May 4 23:58:43.757: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-02a7f712-d1a9-4804-b3b5-cb7c062554ba container secret-volume-test: STEP: delete the pod May 4 23:58:43.806: INFO: Waiting for pod pod-secrets-02a7f712-d1a9-4804-b3b5-cb7c062554ba to disappear May 4 23:58:43.818: INFO: Pod pod-secrets-02a7f712-d1a9-4804-b3b5-cb7c062554ba no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 23:58:43.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9305" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":52,"skipped":831,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 23:58:43.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-a1499a3b-830c-4607-b021-d17471bc4d71 in namespace container-probe-2345 May 4 23:58:47.980: INFO: Started pod busybox-a1499a3b-830c-4607-b021-d17471bc4d71 in namespace container-probe-2345 STEP: checking the pod's current state and verifying that restartCount is present May 4 23:58:47.984: INFO: Initial restart count of pod busybox-a1499a3b-830c-4607-b021-d17471bc4d71 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:02:48.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2345" for this suite. • [SLOW TEST:245.065 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":53,"skipped":836,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:02:48.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:03:02.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1752" for this suite. • [SLOW TEST:13.287 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":288,"completed":54,"skipped":860,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:03:02.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 5 00:03:06.501: INFO: Waiting up to 5m0s for pod "client-envvars-d646e139-0884-4e79-9184-ef7e3edd1b9b" in namespace "pods-5951" to be "Succeeded or Failed" May 5 00:03:06.505: INFO: Pod "client-envvars-d646e139-0884-4e79-9184-ef7e3edd1b9b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.19108ms May 5 00:03:08.510: INFO: Pod "client-envvars-d646e139-0884-4e79-9184-ef7e3edd1b9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008263678s May 5 00:03:10.555: INFO: Pod "client-envvars-d646e139-0884-4e79-9184-ef7e3edd1b9b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.05377812s STEP: Saw pod success May 5 00:03:10.555: INFO: Pod "client-envvars-d646e139-0884-4e79-9184-ef7e3edd1b9b" satisfied condition "Succeeded or Failed" May 5 00:03:10.558: INFO: Trying to get logs from node latest-worker pod client-envvars-d646e139-0884-4e79-9184-ef7e3edd1b9b container env3cont: STEP: delete the pod May 5 00:03:10.590: INFO: Waiting for pod client-envvars-d646e139-0884-4e79-9184-ef7e3edd1b9b to disappear May 5 00:03:10.595: INFO: Pod client-envvars-d646e139-0884-4e79-9184-ef7e3edd1b9b no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:03:10.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5951" for this suite. • [SLOW TEST:8.422 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":288,"completed":55,"skipped":875,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:03:10.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in 
namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium May 5 00:03:10.936: INFO: Waiting up to 5m0s for pod "pod-3d639e8c-7dca-418c-9343-93c5042e4ead" in namespace "emptydir-2335" to be "Succeeded or Failed" May 5 00:03:10.949: INFO: Pod "pod-3d639e8c-7dca-418c-9343-93c5042e4ead": Phase="Pending", Reason="", readiness=false. Elapsed: 12.502246ms May 5 00:03:12.953: INFO: Pod "pod-3d639e8c-7dca-418c-9343-93c5042e4ead": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016740658s May 5 00:03:14.957: INFO: Pod "pod-3d639e8c-7dca-418c-9343-93c5042e4ead": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021257585s STEP: Saw pod success May 5 00:03:14.957: INFO: Pod "pod-3d639e8c-7dca-418c-9343-93c5042e4ead" satisfied condition "Succeeded or Failed" May 5 00:03:14.960: INFO: Trying to get logs from node latest-worker pod pod-3d639e8c-7dca-418c-9343-93c5042e4ead container test-container: STEP: delete the pod May 5 00:03:15.206: INFO: Waiting for pod pod-3d639e8c-7dca-418c-9343-93c5042e4ead to disappear May 5 00:03:15.251: INFO: Pod pod-3d639e8c-7dca-418c-9343-93c5042e4ead no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:03:15.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2335" for this suite. 
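The emptyDir pod exercised above can be sketched as a manifest; the image, command, and mount path here are illustrative (the e2e framework generates its own), but the volume shape matches the test:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-default-medium   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container              # container name from the log
    image: busybox:1.29               # illustrative image
    # print the volume's mode; the default medium typically yields 0777 (drwxrwxrwx)
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # default medium = node-local storage; medium: Memory would use tmpfs
```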
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":56,"skipped":937,"failed":0} SSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:03:15.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command May 5 00:03:15.512: INFO: Waiting up to 5m0s for pod "var-expansion-7b41d2f2-a4be-488f-9b88-c23063bdfef3" in namespace "var-expansion-1165" to be "Succeeded or Failed" May 5 00:03:15.520: INFO: Pod "var-expansion-7b41d2f2-a4be-488f-9b88-c23063bdfef3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.437976ms May 5 00:03:17.543: INFO: Pod "var-expansion-7b41d2f2-a4be-488f-9b88-c23063bdfef3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031351875s May 5 00:03:19.547: INFO: Pod "var-expansion-7b41d2f2-a4be-488f-9b88-c23063bdfef3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.035631606s STEP: Saw pod success May 5 00:03:19.547: INFO: Pod "var-expansion-7b41d2f2-a4be-488f-9b88-c23063bdfef3" satisfied condition "Succeeded or Failed" May 5 00:03:19.567: INFO: Trying to get logs from node latest-worker pod var-expansion-7b41d2f2-a4be-488f-9b88-c23063bdfef3 container dapi-container: STEP: delete the pod May 5 00:03:19.631: INFO: Waiting for pod var-expansion-7b41d2f2-a4be-488f-9b88-c23063bdfef3 to disappear May 5 00:03:19.637: INFO: Pod var-expansion-7b41d2f2-a4be-488f-9b88-c23063bdfef3 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:03:19.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1165" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":288,"completed":57,"skipped":941,"failed":0} ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:03:19.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:03:19.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-3259" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":288,"completed":58,"skipped":941,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:03:19.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 00:03:20.751: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 00:03:22.762: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724233800, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724233800, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724233800, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724233800, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 00:03:25.836: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 5 00:03:29.912: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config attach --namespace=webhook-8849 to-be-attached-pod -i -c=container1' May 5 00:03:33.543: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:03:33.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8849" for this suite. STEP: Destroying namespace "webhook-8849-markers" for this suite. 
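Denying `kubectl attach` (rc: 1 above) works by registering a validating webhook on the `pods/attach` subresource. A sketch of such a configuration, assuming a webhook service like the one the test deploys (the webhook name and path here are hypothetical):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-attaching-pod.example.com   # hypothetical
webhooks:
- name: deny-attaching-pod.example.com   # hypothetical
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CONNECT"]      # attach/exec arrive as CONNECT requests
    resources: ["pods/attach"]
  clientConfig:
    service:
      namespace: webhook-8849    # namespace from the log
      name: e2e-test-webhook     # service name from the log
      path: /pods/attach         # hypothetical handler path
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail            # reject when the webhook is unreachable
```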
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.768 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":288,"completed":59,"skipped":942,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:03:33.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 00:03:34.576: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 00:03:36.586: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724233814, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724233814, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724233814, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724233814, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 00:03:38.599: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724233814, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724233814, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724233814, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724233814, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 00:03:41.644: INFO: Waiting for amount of 
service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:03:41.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9630" for this suite. STEP: Destroying namespace "webhook-9630-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.276 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":288,"completed":60,"skipped":982,"failed":0} SSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:03:41.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 5 00:03:42.046: INFO: Waiting up to 5m0s for pod "downward-api-40730dad-b375-4bcb-9d37-161840688bf1" in namespace "downward-api-4581" to be "Succeeded or Failed" May 5 00:03:42.143: INFO: Pod "downward-api-40730dad-b375-4bcb-9d37-161840688bf1": Phase="Pending", Reason="", readiness=false. Elapsed: 96.767504ms May 5 00:03:44.145: INFO: Pod "downward-api-40730dad-b375-4bcb-9d37-161840688bf1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099564369s May 5 00:03:46.149: INFO: Pod "downward-api-40730dad-b375-4bcb-9d37-161840688bf1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.103599228s STEP: Saw pod success May 5 00:03:46.149: INFO: Pod "downward-api-40730dad-b375-4bcb-9d37-161840688bf1" satisfied condition "Succeeded or Failed" May 5 00:03:46.152: INFO: Trying to get logs from node latest-worker2 pod downward-api-40730dad-b375-4bcb-9d37-161840688bf1 container dapi-container: STEP: delete the pod May 5 00:03:46.217: INFO: Waiting for pod downward-api-40730dad-b375-4bcb-9d37-161840688bf1 to disappear May 5 00:03:46.229: INFO: Pod downward-api-40730dad-b375-4bcb-9d37-161840688bf1 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:03:46.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4581" for this suite. 
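The downward API behavior tested above relies on `resourceFieldRef`: when a container declares no CPU/memory limits, the exposed values fall back to the node's allocatable resources. A minimal sketch (pod name, image, and env var names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-defaults   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container        # container name from the log
    image: busybox:1.29         # illustrative image
    command: ["sh", "-c", "env"]
    # no resources.limits declared, so these resolve to node allocatable
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```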
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":288,"completed":61,"skipped":986,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:03:46.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 5 00:03:46.287: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:03:47.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6726" for this suite. 
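Custom resource defaulting "for requests and from storage" comes from `default` values in a CRD's structural schema: the apiserver applies them both when admitting a create/update request and when reading an object persisted without the field. A hedged sketch with hypothetical group and kind names:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com        # hypothetical
spec:
  group: example.com            # hypothetical
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
                default: 1      # applied on requests and when reading from storage
```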
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":288,"completed":62,"skipped":993,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:03:47.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1523 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 5 00:03:47.673: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5803' May 5 00:03:47.792: INFO: stderr: "" May 5 00:03:47.792: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528 May 5 00:03:47.805: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config 
delete pods e2e-test-httpd-pod --namespace=kubectl-5803' May 5 00:03:51.610: INFO: stderr: "" May 5 00:03:51.611: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:03:51.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5803" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":288,"completed":63,"skipped":1022,"failed":0} SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:03:51.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 5 00:03:51.810: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 5 00:03:51.912: INFO: Waiting for terminating namespaces to be deleted... 
May 5 00:03:51.915: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 5 00:03:51.919: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 5 00:03:51.919: INFO: Container kindnet-cni ready: true, restart count 0 May 5 00:03:51.919: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 5 00:03:51.919: INFO: Container kube-proxy ready: true, restart count 0 May 5 00:03:51.919: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 5 00:03:51.924: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 5 00:03:51.924: INFO: Container kindnet-cni ready: true, restart count 0 May 5 00:03:51.924: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 5 00:03:51.924: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-0204a24a-6668-4bd7-89b0-9c36c8fd932b 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-0204a24a-6668-4bd7-89b0-9c36c8fd932b off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-0204a24a-6668-4bd7-89b0-9c36c8fd932b [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:04:08.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5721" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.606 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":288,"completed":64,"skipped":1025,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container 
should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:04:08.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 5 00:04:13.548: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:04:13.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5010" for this suite. 
• [SLOW TEST:5.442 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":65,"skipped":1152,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:04:13.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 5 
00:04:13.940: INFO: >>> kubeConfig: /root/.kube/config May 5 00:04:15.991: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:04:26.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7986" for this suite. • [SLOW TEST:13.050 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":288,"completed":66,"skipped":1160,"failed":0} SSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:04:26.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-175 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-175 I0505 00:04:26.884865 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-175, replica count: 2 I0505 00:04:29.935261 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0505 00:04:32.935485 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 5 00:04:32.935: INFO: Creating new exec pod May 5 00:04:37.951: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-175 execpodwbf9p -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 5 00:04:38.178: INFO: stderr: "I0505 00:04:38.098002 977 log.go:172] (0xc00050c000) (0xc0003e81e0) Create stream\nI0505 00:04:38.098074 977 log.go:172] (0xc00050c000) (0xc0003e81e0) Stream added, broadcasting: 1\nI0505 00:04:38.099832 977 log.go:172] (0xc00050c000) Reply frame received for 1\nI0505 00:04:38.099871 977 log.go:172] (0xc00050c000) (0xc0003e9180) Create stream\nI0505 00:04:38.099883 977 log.go:172] (0xc00050c000) (0xc0003e9180) Stream added, broadcasting: 3\nI0505 00:04:38.100830 977 log.go:172] (0xc00050c000) Reply frame received for 3\nI0505 00:04:38.100860 977 log.go:172] (0xc00050c000) (0xc000374d20) Create stream\nI0505 00:04:38.100874 977 log.go:172] (0xc00050c000) (0xc000374d20) Stream added, broadcasting: 5\nI0505 00:04:38.102076 977 log.go:172] (0xc00050c000) Reply frame received for 5\nI0505 00:04:38.171904 977 log.go:172] (0xc00050c000) Data 
frame received for 3\nI0505 00:04:38.171941 977 log.go:172] (0xc0003e9180) (3) Data frame handling\nI0505 00:04:38.171970 977 log.go:172] (0xc00050c000) Data frame received for 5\nI0505 00:04:38.171983 977 log.go:172] (0xc000374d20) (5) Data frame handling\nI0505 00:04:38.171991 977 log.go:172] (0xc000374d20) (5) Data frame sent\nI0505 00:04:38.171999 977 log.go:172] (0xc00050c000) Data frame received for 5\nI0505 00:04:38.172005 977 log.go:172] (0xc000374d20) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0505 00:04:38.172078 977 log.go:172] (0xc000374d20) (5) Data frame sent\nI0505 00:04:38.172101 977 log.go:172] (0xc00050c000) Data frame received for 5\nI0505 00:04:38.172113 977 log.go:172] (0xc000374d20) (5) Data frame handling\nI0505 00:04:38.174313 977 log.go:172] (0xc00050c000) Data frame received for 1\nI0505 00:04:38.174340 977 log.go:172] (0xc0003e81e0) (1) Data frame handling\nI0505 00:04:38.174358 977 log.go:172] (0xc0003e81e0) (1) Data frame sent\nI0505 00:04:38.174382 977 log.go:172] (0xc00050c000) (0xc0003e81e0) Stream removed, broadcasting: 1\nI0505 00:04:38.174399 977 log.go:172] (0xc00050c000) Go away received\nI0505 00:04:38.174738 977 log.go:172] (0xc00050c000) (0xc0003e81e0) Stream removed, broadcasting: 1\nI0505 00:04:38.174754 977 log.go:172] (0xc00050c000) (0xc0003e9180) Stream removed, broadcasting: 3\nI0505 00:04:38.174762 977 log.go:172] (0xc00050c000) (0xc000374d20) Stream removed, broadcasting: 5\n" May 5 00:04:38.179: INFO: stdout: "" May 5 00:04:38.179: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-175 execpodwbf9p -- /bin/sh -x -c nc -zv -t -w 2 10.96.232.232 80' May 5 00:04:38.399: INFO: stderr: "I0505 00:04:38.317709 997 log.go:172] (0xc00096ec60) (0xc000830000) Create stream\nI0505 00:04:38.317759 997 log.go:172] (0xc00096ec60) (0xc000830000) Stream added, 
broadcasting: 1\nI0505 00:04:38.319906 997 log.go:172] (0xc00096ec60) Reply frame received for 1\nI0505 00:04:38.319943 997 log.go:172] (0xc00096ec60) (0xc0006310e0) Create stream\nI0505 00:04:38.319960 997 log.go:172] (0xc00096ec60) (0xc0006310e0) Stream added, broadcasting: 3\nI0505 00:04:38.320857 997 log.go:172] (0xc00096ec60) Reply frame received for 3\nI0505 00:04:38.320905 997 log.go:172] (0xc00096ec60) (0xc000830fa0) Create stream\nI0505 00:04:38.320921 997 log.go:172] (0xc00096ec60) (0xc000830fa0) Stream added, broadcasting: 5\nI0505 00:04:38.321856 997 log.go:172] (0xc00096ec60) Reply frame received for 5\nI0505 00:04:38.393359 997 log.go:172] (0xc00096ec60) Data frame received for 3\nI0505 00:04:38.393401 997 log.go:172] (0xc00096ec60) Data frame received for 5\nI0505 00:04:38.393419 997 log.go:172] (0xc000830fa0) (5) Data frame handling\nI0505 00:04:38.393427 997 log.go:172] (0xc000830fa0) (5) Data frame sent\nI0505 00:04:38.393432 997 log.go:172] (0xc00096ec60) Data frame received for 5\nI0505 00:04:38.393437 997 log.go:172] (0xc000830fa0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.232.232 80\nConnection to 10.96.232.232 80 port [tcp/http] succeeded!\nI0505 00:04:38.393464 997 log.go:172] (0xc0006310e0) (3) Data frame handling\nI0505 00:04:38.394883 997 log.go:172] (0xc00096ec60) Data frame received for 1\nI0505 00:04:38.394907 997 log.go:172] (0xc000830000) (1) Data frame handling\nI0505 00:04:38.394921 997 log.go:172] (0xc000830000) (1) Data frame sent\nI0505 00:04:38.394966 997 log.go:172] (0xc00096ec60) (0xc000830000) Stream removed, broadcasting: 1\nI0505 00:04:38.394985 997 log.go:172] (0xc00096ec60) Go away received\nI0505 00:04:38.395345 997 log.go:172] (0xc00096ec60) (0xc000830000) Stream removed, broadcasting: 1\nI0505 00:04:38.395361 997 log.go:172] (0xc00096ec60) (0xc0006310e0) Stream removed, broadcasting: 3\nI0505 00:04:38.395369 997 log.go:172] (0xc00096ec60) (0xc000830fa0) Stream removed, broadcasting: 5\n" May 5 00:04:38.399: 
INFO: stdout: "" May 5 00:04:38.399: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:04:38.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-175" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:11.711 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":288,"completed":67,"skipped":1163,"failed":0} S ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:04:38.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-e9a789e6-5513-46d2-939e-405ca0fb169b [AfterEach] [sig-api-machinery] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:04:38.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7859" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":288,"completed":68,"skipped":1164,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:04:38.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 5 00:04:38.674: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 5 00:04:41.633: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-561 create -f -' May 5 00:04:45.585: INFO: stderr: "" May 5 00:04:45.585: INFO: stdout: "e2e-test-crd-publish-openapi-2759-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 5 00:04:45.585: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-561 delete e2e-test-crd-publish-openapi-2759-crds test-cr' May 5 
00:04:45.811: INFO: stderr: "" May 5 00:04:45.811: INFO: stdout: "e2e-test-crd-publish-openapi-2759-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 5 00:04:45.811: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-561 apply -f -' May 5 00:04:46.091: INFO: stderr: "" May 5 00:04:46.091: INFO: stdout: "e2e-test-crd-publish-openapi-2759-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 5 00:04:46.092: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-561 delete e2e-test-crd-publish-openapi-2759-crds test-cr' May 5 00:04:46.199: INFO: stderr: "" May 5 00:04:46.199: INFO: stdout: "e2e-test-crd-publish-openapi-2759-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 5 00:04:46.199: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2759-crds' May 5 00:04:46.460: INFO: stderr: "" May 5 00:04:46.460: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2759-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:04:49.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-561" for this suite. 
• [SLOW TEST:10.864 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":288,"completed":69,"skipped":1181,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:04:49.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-54087bab-2d2d-440b-ba4c-052e3ed58fae STEP: Creating a pod to test consume secrets May 5 00:04:49.641: INFO: Waiting up to 5m0s for pod "pod-secrets-cb0d84c5-f887-40b4-a742-65d4a08e8669" in namespace "secrets-8556" to be "Succeeded or Failed" May 5 00:04:49.661: INFO: Pod "pod-secrets-cb0d84c5-f887-40b4-a742-65d4a08e8669": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.077269ms May 5 00:04:51.665: INFO: Pod "pod-secrets-cb0d84c5-f887-40b4-a742-65d4a08e8669": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024221212s May 5 00:04:53.670: INFO: Pod "pod-secrets-cb0d84c5-f887-40b4-a742-65d4a08e8669": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028734769s STEP: Saw pod success May 5 00:04:53.670: INFO: Pod "pod-secrets-cb0d84c5-f887-40b4-a742-65d4a08e8669" satisfied condition "Succeeded or Failed" May 5 00:04:53.672: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-cb0d84c5-f887-40b4-a742-65d4a08e8669 container secret-volume-test: STEP: delete the pod May 5 00:04:53.705: INFO: Waiting for pod pod-secrets-cb0d84c5-f887-40b4-a742-65d4a08e8669 to disappear May 5 00:04:53.718: INFO: Pod pod-secrets-cb0d84c5-f887-40b4-a742-65d4a08e8669 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:04:53.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8556" for this suite. STEP: Destroying namespace "secret-namespace-2903" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":288,"completed":70,"skipped":1193,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:04:53.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 5 00:04:53.830: INFO: Waiting up to 5m0s for pod "pod-d354385a-e9f5-4622-a44c-6b83c768eb1d" in namespace "emptydir-8515" to be "Succeeded or Failed" May 5 00:04:53.916: INFO: Pod "pod-d354385a-e9f5-4622-a44c-6b83c768eb1d": Phase="Pending", Reason="", readiness=false. Elapsed: 86.075434ms May 5 00:04:56.011: INFO: Pod "pod-d354385a-e9f5-4622-a44c-6b83c768eb1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.181704551s May 5 00:04:58.015: INFO: Pod "pod-d354385a-e9f5-4622-a44c-6b83c768eb1d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185201162s May 5 00:05:00.020: INFO: Pod "pod-d354385a-e9f5-4622-a44c-6b83c768eb1d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.190033065s STEP: Saw pod success May 5 00:05:00.020: INFO: Pod "pod-d354385a-e9f5-4622-a44c-6b83c768eb1d" satisfied condition "Succeeded or Failed" May 5 00:05:00.023: INFO: Trying to get logs from node latest-worker pod pod-d354385a-e9f5-4622-a44c-6b83c768eb1d container test-container: STEP: delete the pod May 5 00:05:00.087: INFO: Waiting for pod pod-d354385a-e9f5-4622-a44c-6b83c768eb1d to disappear May 5 00:05:00.099: INFO: Pod pod-d354385a-e9f5-4622-a44c-6b83c768eb1d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:05:00.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8515" for this suite. • [SLOW TEST:6.345 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":71,"skipped":1197,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:05:00.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1311 STEP: creating the pod May 5 00:05:00.186: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7497' May 5 00:05:00.581: INFO: stderr: "" May 5 00:05:00.581: INFO: stdout: "pod/pause created\n" May 5 00:05:00.581: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 5 00:05:00.581: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7497" to be "running and ready" May 5 00:05:00.605: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 24.048872ms May 5 00:05:02.609: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0276245s May 5 00:05:04.613: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.03168122s May 5 00:05:04.613: INFO: Pod "pause" satisfied condition "running and ready" May 5 00:05:04.613: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod May 5 00:05:04.613: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7497' May 5 00:05:04.730: INFO: stderr: "" May 5 00:05:04.730: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 5 00:05:04.730: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7497' May 5 00:05:04.844: INFO: stderr: "" May 5 00:05:04.844: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 5 00:05:04.844: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7497' May 5 00:05:04.946: INFO: stderr: "" May 5 00:05:04.946: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 5 00:05:04.946: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7497' May 5 00:05:05.039: INFO: stderr: "" May 5 00:05:05.039: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 STEP: using delete to clean up resources May 5 00:05:05.039: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete 
--grace-period=0 --force -f - --namespace=kubectl-7497' May 5 00:05:05.183: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 5 00:05:05.183: INFO: stdout: "pod \"pause\" force deleted\n" May 5 00:05:05.183: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7497' May 5 00:05:05.490: INFO: stderr: "No resources found in kubectl-7497 namespace.\n" May 5 00:05:05.490: INFO: stdout: "" May 5 00:05:05.490: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7497 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 5 00:05:05.633: INFO: stderr: "" May 5 00:05:05.633: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:05:05.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7497" for this suite. 
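The label round-trip driven by the test harness above can be reproduced by hand. A minimal sketch, assuming a running pod named pause in namespace kubectl-7497 (both names taken from the log) and a configured kubeconfig:

```shell
# Attach the label, then confirm it via the -L column.
kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-7497
kubectl get pod pause -L testing-label --namespace=kubectl-7497   # value appears in the TESTING-LABEL column
# A trailing dash after the key removes the label.
kubectl label pods pause testing-label- --namespace=kubectl-7497
kubectl get pod pause -L testing-label --namespace=kubectl-7497   # column is now empty
```

Note that the go-template query in the cleanup step above filters out pods that already carry a deletionTimestamp, so an empty stdout confirms nothing is left pending deletion.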
• [SLOW TEST:5.579 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":288,"completed":72,"skipped":1203,"failed":0} SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:05:05.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-9vft STEP: Creating a pod to test atomic-volume-subpath May 5 00:05:05.760: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-9vft" in namespace "subpath-7018" to be "Succeeded or Failed" May 5 00:05:05.832: INFO: Pod "pod-subpath-test-downwardapi-9vft": Phase="Pending", Reason="", readiness=false. 
Elapsed: 71.938494ms May 5 00:05:07.836: INFO: Pod "pod-subpath-test-downwardapi-9vft": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076070322s May 5 00:05:09.840: INFO: Pod "pod-subpath-test-downwardapi-9vft": Phase="Running", Reason="", readiness=true. Elapsed: 4.080550139s May 5 00:05:11.845: INFO: Pod "pod-subpath-test-downwardapi-9vft": Phase="Running", Reason="", readiness=true. Elapsed: 6.085534141s May 5 00:05:13.855: INFO: Pod "pod-subpath-test-downwardapi-9vft": Phase="Running", Reason="", readiness=true. Elapsed: 8.09559513s May 5 00:05:15.860: INFO: Pod "pod-subpath-test-downwardapi-9vft": Phase="Running", Reason="", readiness=true. Elapsed: 10.099906345s May 5 00:05:17.864: INFO: Pod "pod-subpath-test-downwardapi-9vft": Phase="Running", Reason="", readiness=true. Elapsed: 12.104373514s May 5 00:05:19.869: INFO: Pod "pod-subpath-test-downwardapi-9vft": Phase="Running", Reason="", readiness=true. Elapsed: 14.108671131s May 5 00:05:21.873: INFO: Pod "pod-subpath-test-downwardapi-9vft": Phase="Running", Reason="", readiness=true. Elapsed: 16.113381121s May 5 00:05:23.878: INFO: Pod "pod-subpath-test-downwardapi-9vft": Phase="Running", Reason="", readiness=true. Elapsed: 18.117749469s May 5 00:05:25.882: INFO: Pod "pod-subpath-test-downwardapi-9vft": Phase="Running", Reason="", readiness=true. Elapsed: 20.12231636s May 5 00:05:27.887: INFO: Pod "pod-subpath-test-downwardapi-9vft": Phase="Running", Reason="", readiness=true. Elapsed: 22.126759343s May 5 00:05:29.891: INFO: Pod "pod-subpath-test-downwardapi-9vft": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.130850665s STEP: Saw pod success May 5 00:05:29.891: INFO: Pod "pod-subpath-test-downwardapi-9vft" satisfied condition "Succeeded or Failed" May 5 00:05:29.894: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-9vft container test-container-subpath-downwardapi-9vft: STEP: delete the pod May 5 00:05:30.087: INFO: Waiting for pod pod-subpath-test-downwardapi-9vft to disappear May 5 00:05:30.100: INFO: Pod pod-subpath-test-downwardapi-9vft no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-9vft May 5 00:05:30.100: INFO: Deleting pod "pod-subpath-test-downwardapi-9vft" in namespace "subpath-7018" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:05:30.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7018" for this suite. • [SLOW TEST:24.435 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":288,"completed":73,"skipped":1207,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:05:30.122: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8743.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8743.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 5 00:05:36.250: INFO: DNS probes using dns-8743/dns-test-61272cd6-c4cb-4014-9929-b79dece2da82 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:05:36.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8743" for this suite. • [SLOW TEST:6.241 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":288,"completed":74,"skipped":1220,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:05:36.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 5 00:05:36.768: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 5 00:05:38.861: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:05:40.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6804" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":288,"completed":75,"skipped":1233,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:05:40.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:05:51.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7854" for this suite. • [SLOW TEST:11.654 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":288,"completed":76,"skipped":1234,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:05:51.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 5 00:05:51.802: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b50d4d89-3472-4f29-9f5e-33f6a1c246d6" in namespace "projected-198" to be "Succeeded or Failed" May 5 00:05:51.804: INFO: Pod "downwardapi-volume-b50d4d89-3472-4f29-9f5e-33f6a1c246d6": Phase="Pending", Reason="", readiness=false. Elapsed: 1.98716ms May 5 00:05:53.849: INFO: Pod "downwardapi-volume-b50d4d89-3472-4f29-9f5e-33f6a1c246d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047839241s May 5 00:05:55.854: INFO: Pod "downwardapi-volume-b50d4d89-3472-4f29-9f5e-33f6a1c246d6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.0520743s STEP: Saw pod success May 5 00:05:55.854: INFO: Pod "downwardapi-volume-b50d4d89-3472-4f29-9f5e-33f6a1c246d6" satisfied condition "Succeeded or Failed" May 5 00:05:55.857: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-b50d4d89-3472-4f29-9f5e-33f6a1c246d6 container client-container: STEP: delete the pod May 5 00:05:55.892: INFO: Waiting for pod downwardapi-volume-b50d4d89-3472-4f29-9f5e-33f6a1c246d6 to disappear May 5 00:05:55.899: INFO: Pod downwardapi-volume-b50d4d89-3472-4f29-9f5e-33f6a1c246d6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:05:55.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-198" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":77,"skipped":1236,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:05:55.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:06:00.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9602" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":288,"completed":78,"skipped":1267,"failed":0} SSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:06:00.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 5 00:08:00.117: INFO: Deleting pod "var-expansion-2ee2736c-1035-4ccf-9ca6-3bb9243cdfcd" in namespace "var-expansion-6761" May 5 00:08:00.122: INFO: Wait up to 5m0s for pod "var-expansion-2ee2736c-1035-4ccf-9ca6-3bb9243cdfcd" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:08:02.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6761" for this suite. 
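The failure asserted above comes from the kubelet rejecting anything other than $(VAR) substitution in volume subpaths. For contrast, a minimal sketch of the supported form, using a hypothetical pod name and the subPathExpr volume-mount field (only $(VAR) references to declared environment variables are expanded there; backticks and other shell syntax are rejected, which is what the test verifies):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo   # hypothetical name
spec:
  containers:
  - name: demo
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir
      mountPath: /logs
      subPathExpr: $(POD_NAME)   # expanded to the pod name at mount time
  volumes:
  - name: workdir
    emptyDir: {}
EOF
```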
• [SLOW TEST:122.192 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":288,"completed":79,"skipped":1272,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:08:02.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:08:33.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2464" for this suite. STEP: Destroying namespace "nsdeletetest-1491" for this suite. May 5 00:08:33.542: INFO: Namespace nsdeletetest-1491 was already deleted STEP: Destroying namespace "nsdeletetest-3659" for this suite. • [SLOW TEST:31.347 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":288,"completed":80,"skipped":1279,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:08:33.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 
STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 5 00:08:38.178: INFO: Successfully updated pod "pod-update-033e7b4a-0939-4453-b61b-119eb119eb75" STEP: verifying the updated pod is in kubernetes May 5 00:08:38.201: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:08:38.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8346" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":288,"completed":81,"skipped":1317,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:08:38.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 5 00:08:38.318: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7980 /api/v1/namespaces/watch-7980/configmaps/e2e-watch-test-label-changed a72d9484-d6cd-4d63-a7a5-f197927db742 
1519983 0 2020-05-05 00:08:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-05 00:08:38 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 5 00:08:38.319: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7980 /api/v1/namespaces/watch-7980/configmaps/e2e-watch-test-label-changed a72d9484-d6cd-4d63-a7a5-f197927db742 1519984 0 2020-05-05 00:08:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-05 00:08:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 5 00:08:38.319: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7980 /api/v1/namespaces/watch-7980/configmaps/e2e-watch-test-label-changed a72d9484-d6cd-4d63-a7a5-f197927db742 1519985 0 2020-05-05 00:08:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-05 00:08:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 5 00:08:48.371: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7980 /api/v1/namespaces/watch-7980/configmaps/e2e-watch-test-label-changed 
a72d9484-d6cd-4d63-a7a5-f197927db742 1520035 0 2020-05-05 00:08:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-05 00:08:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 5 00:08:48.371: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7980 /api/v1/namespaces/watch-7980/configmaps/e2e-watch-test-label-changed a72d9484-d6cd-4d63-a7a5-f197927db742 1520036 0 2020-05-05 00:08:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-05 00:08:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} May 5 00:08:48.371: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7980 /api/v1/namespaces/watch-7980/configmaps/e2e-watch-test-label-changed a72d9484-d6cd-4d63-a7a5-f197927db742 1520037 0 2020-05-05 00:08:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-05 00:08:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:08:48.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7980" for this suite. 
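The ADDED/MODIFIED/DELETED sequence recorded above can be observed interactively with a label-selected watch. A sketch using the selector and namespace from the log; the --output-watch-events flag (available in recent kubectl versions) prefixes each printed object with its event type:

```shell
# Stream watch events for configmaps carrying the test label. Changing the
# label away produces a DELETED event for this watch; restoring it yields
# ADDED again, mirroring the notifications the test asserts on.
kubectl get configmaps \
  -l watch-this-configmap=label-changed-and-restored \
  --namespace=watch-7980 --watch --output-watch-events
```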
• [SLOW TEST:10.197 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":288,"completed":82,"skipped":1323,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:08:48.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 5 00:08:49.165: INFO: Pod name wrapped-volume-race-b93e13d2-ca6c-4b58-8f46-cc3d13f03f63: Found 0 pods out of 5 May 5 00:08:54.191: INFO: Pod name wrapped-volume-race-b93e13d2-ca6c-4b58-8f46-cc3d13f03f63: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-b93e13d2-ca6c-4b58-8f46-cc3d13f03f63 in namespace emptydir-wrapper-8245, will wait for the garbage collector to delete the pods May 5 00:09:08.299: INFO: Deleting 
ReplicationController wrapped-volume-race-b93e13d2-ca6c-4b58-8f46-cc3d13f03f63 took: 32.557835ms May 5 00:09:08.699: INFO: Terminating ReplicationController wrapped-volume-race-b93e13d2-ca6c-4b58-8f46-cc3d13f03f63 pods took: 400.280063ms STEP: Creating RC which spawns configmap-volume pods May 5 00:09:25.066: INFO: Pod name wrapped-volume-race-ba071e2d-1ddf-40ca-a6d1-8ea1ff3a690f: Found 0 pods out of 5 May 5 00:09:31.397: INFO: Pod name wrapped-volume-race-ba071e2d-1ddf-40ca-a6d1-8ea1ff3a690f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-ba071e2d-1ddf-40ca-a6d1-8ea1ff3a690f in namespace emptydir-wrapper-8245, will wait for the garbage collector to delete the pods May 5 00:09:45.543: INFO: Deleting ReplicationController wrapped-volume-race-ba071e2d-1ddf-40ca-a6d1-8ea1ff3a690f took: 27.37687ms May 5 00:09:45.843: INFO: Terminating ReplicationController wrapped-volume-race-ba071e2d-1ddf-40ca-a6d1-8ea1ff3a690f pods took: 300.315046ms STEP: Creating RC which spawns configmap-volume pods May 5 00:09:55.606: INFO: Pod name wrapped-volume-race-46e0ffbf-27de-4bfd-9caa-843b389a70b4: Found 0 pods out of 5 May 5 00:10:00.615: INFO: Pod name wrapped-volume-race-46e0ffbf-27de-4bfd-9caa-843b389a70b4: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-46e0ffbf-27de-4bfd-9caa-843b389a70b4 in namespace emptydir-wrapper-8245, will wait for the garbage collector to delete the pods May 5 00:10:16.706: INFO: Deleting ReplicationController wrapped-volume-race-46e0ffbf-27de-4bfd-9caa-843b389a70b4 took: 8.141337ms May 5 00:10:17.006: INFO: Terminating ReplicationController wrapped-volume-race-46e0ffbf-27de-4bfd-9caa-843b389a70b4 pods took: 300.268521ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:10:26.164: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8245" for this suite. • [SLOW TEST:97.774 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":288,"completed":83,"skipped":1346,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:10:26.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 5 00:10:26.229: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 5 00:10:29.146: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7130 create -f -' May 5 00:10:32.475: INFO: stderr: "" May 5 00:10:32.475: INFO: stdout: 
"e2e-test-crd-publish-openapi-3645-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 5 00:10:32.475: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7130 delete e2e-test-crd-publish-openapi-3645-crds test-cr' May 5 00:10:32.592: INFO: stderr: "" May 5 00:10:32.592: INFO: stdout: "e2e-test-crd-publish-openapi-3645-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 5 00:10:32.592: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7130 apply -f -' May 5 00:10:32.926: INFO: stderr: "" May 5 00:10:32.926: INFO: stdout: "e2e-test-crd-publish-openapi-3645-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 5 00:10:32.926: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7130 delete e2e-test-crd-publish-openapi-3645-crds test-cr' May 5 00:10:33.073: INFO: stderr: "" May 5 00:10:33.073: INFO: stdout: "e2e-test-crd-publish-openapi-3645-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 5 00:10:33.073: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3645-crds' May 5 00:10:33.328: INFO: stderr: "" May 5 00:10:33.328: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3645-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:10:35.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"crd-publish-openapi-7130" for this suite. • [SLOW TEST:9.118 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":288,"completed":84,"skipped":1371,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:10:35.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 5 00:10:35.464: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"93f2c991-5d0f-4a68-81ce-2da2ac988532", Controller:(*bool)(0xc0033cb382), BlockOwnerDeletion:(*bool)(0xc0033cb383)}} May 5 00:10:35.504: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"7b06a8be-4134-47f2-857f-77a1222aa3e8", Controller:(*bool)(0xc0033cb5f2), BlockOwnerDeletion:(*bool)(0xc0033cb5f3)}} May 5 00:10:35.576: INFO: 
pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"de84a36f-0a36-460d-b434-3f5beb70aac1", Controller:(*bool)(0xc003432536), BlockOwnerDeletion:(*bool)(0xc003432537)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:10:40.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-482" for this suite. • [SLOW TEST:5.298 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":288,"completed":85,"skipped":1386,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:10:40.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-51ea9f46-5a69-4e62-83d1-1c1c5c8f9d5d STEP: Creating a pod to 
test consume configMaps May 5 00:10:40.734: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7e4f900d-6846-4c39-8163-7abfaad68dd4" in namespace "projected-3497" to be "Succeeded or Failed" May 5 00:10:40.753: INFO: Pod "pod-projected-configmaps-7e4f900d-6846-4c39-8163-7abfaad68dd4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.914799ms May 5 00:10:42.776: INFO: Pod "pod-projected-configmaps-7e4f900d-6846-4c39-8163-7abfaad68dd4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041740727s May 5 00:10:44.780: INFO: Pod "pod-projected-configmaps-7e4f900d-6846-4c39-8163-7abfaad68dd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045688899s STEP: Saw pod success May 5 00:10:44.780: INFO: Pod "pod-projected-configmaps-7e4f900d-6846-4c39-8163-7abfaad68dd4" satisfied condition "Succeeded or Failed" May 5 00:10:44.783: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-7e4f900d-6846-4c39-8163-7abfaad68dd4 container projected-configmap-volume-test: STEP: delete the pod May 5 00:10:44.859: INFO: Waiting for pod pod-projected-configmaps-7e4f900d-6846-4c39-8163-7abfaad68dd4 to disappear May 5 00:10:44.870: INFO: Pod pod-projected-configmaps-7e4f900d-6846-4c39-8163-7abfaad68dd4 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:10:44.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3497" for this suite. 
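The projected-configMap test that just passed mounts a ConfigMap through a `projected` volume and remaps a key to a custom path inside the mount. A minimal sketch of the shape of such a pod follows; the ConfigMap name, key, path, and container command here are illustrative assumptions, not values taken from the test fixture:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
    # agnhost's mounttest reads the file back so the test can verify contents
    args: ["mounttest", "--file_content=/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: example-configmap     # hypothetical ConfigMap name
          items:
          - key: data-2               # key present in the ConfigMap
            path: path/to/data-2      # remapped path under the mountPath
```

With the `items` mapping, only the listed keys are projected, and each appears at its `path` rather than at a file named after the key.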
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":86,"skipped":1410,"failed":0} SSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:10:44.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 5 00:10:49.002: INFO: &Pod{ObjectMeta:{send-events-913a6108-58f0-42b0-9255-44f325c480ae events-9530 /api/v1/namespaces/events-9530/pods/send-events-913a6108-58f0-42b0-9255-44f325c480ae e8e4cb6f-3949-4a51-ac59-780cb66c21f9 1521316 0 2020-05-05 00:10:44 +0000 UTC map[name:foo time:932930545] map[] [] [] [{e2e.test Update v1 2020-05-05 00:10:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 00:10:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.58\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lv9dx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lv9dx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},
Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lv9dx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 00:10:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-05 00:10:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 00:10:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 00:10:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.58,StartTime:2020-05-05 00:10:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-05 00:10:47 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://a7c329d2b3094ecae92325fefb3b1c6a122a931660d10f69d767f1a1ced4128a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.58,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 5 00:10:51.008: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 5 00:10:53.012: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:10:53.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9530" for this suite. 
• [SLOW TEST:8.180 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":288,"completed":87,"skipped":1414,"failed":0} SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:10:53.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 5 00:10:53.145: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 5 00:10:53.165: INFO: Waiting for terminating namespaces to be deleted... 
May 5 00:10:53.168: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 5 00:10:53.174: INFO: send-events-913a6108-58f0-42b0-9255-44f325c480ae from events-9530 started at 2020-05-05 00:10:45 +0000 UTC (1 container statuses recorded) May 5 00:10:53.174: INFO: Container p ready: true, restart count 0 May 5 00:10:53.174: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 5 00:10:53.174: INFO: Container kindnet-cni ready: true, restart count 0 May 5 00:10:53.174: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 5 00:10:53.174: INFO: Container kube-proxy ready: true, restart count 0 May 5 00:10:53.174: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 5 00:10:53.179: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 5 00:10:53.179: INFO: Container kindnet-cni ready: true, restart count 0 May 5 00:10:53.179: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 5 00:10:53.179: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160bf958c4423f8c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:10:54.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3790" for this suite. 
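The `FailedScheduling` event above is produced by a pod whose `nodeSelector` matches no node in the cluster. A sketch of such a pod, assuming an invented label key/value that no node carries (only the pod name is taken from the event):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod            # name as seen in the event above
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2   # assumed placeholder image
  nodeSelector:
    env: no-such-node-has-this    # hypothetical unmatched label
```

Because no node satisfies the selector, the scheduler leaves the pod Pending and records the `0/3 nodes are available: 3 node(s) didn't match node selector.` event the test asserts on.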
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":288,"completed":88,"skipped":1425,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:10:54.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-3447 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-3447 STEP: creating replication controller externalsvc in namespace services-3447 I0505 00:10:54.456401 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-3447, replica count: 2 I0505 00:10:57.506817 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0505 00:11:00.507055 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 
0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 5 00:11:00.567: INFO: Creating new exec pod May 5 00:11:04.612: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3447 execpodz7rq5 -- /bin/sh -x -c nslookup nodeport-service' May 5 00:11:04.897: INFO: stderr: "I0505 00:11:04.755300 1413 log.go:172] (0xc000ab9290) (0xc0005541e0) Create stream\nI0505 00:11:04.755367 1413 log.go:172] (0xc000ab9290) (0xc0005541e0) Stream added, broadcasting: 1\nI0505 00:11:04.759410 1413 log.go:172] (0xc000ab9290) Reply frame received for 1\nI0505 00:11:04.759471 1413 log.go:172] (0xc000ab9290) (0xc000250f00) Create stream\nI0505 00:11:04.759489 1413 log.go:172] (0xc000ab9290) (0xc000250f00) Stream added, broadcasting: 3\nI0505 00:11:04.760408 1413 log.go:172] (0xc000ab9290) Reply frame received for 3\nI0505 00:11:04.760455 1413 log.go:172] (0xc000ab9290) (0xc0004c65a0) Create stream\nI0505 00:11:04.760483 1413 log.go:172] (0xc000ab9290) (0xc0004c65a0) Stream added, broadcasting: 5\nI0505 00:11:04.761371 1413 log.go:172] (0xc000ab9290) Reply frame received for 5\nI0505 00:11:04.882678 1413 log.go:172] (0xc000ab9290) Data frame received for 5\nI0505 00:11:04.882714 1413 log.go:172] (0xc0004c65a0) (5) Data frame handling\nI0505 00:11:04.882738 1413 log.go:172] (0xc0004c65a0) (5) Data frame sent\n+ nslookup nodeport-service\nI0505 00:11:04.890288 1413 log.go:172] (0xc000ab9290) Data frame received for 3\nI0505 00:11:04.890312 1413 log.go:172] (0xc000250f00) (3) Data frame handling\nI0505 00:11:04.890330 1413 log.go:172] (0xc000250f00) (3) Data frame sent\nI0505 00:11:04.891040 1413 log.go:172] (0xc000ab9290) Data frame received for 3\nI0505 00:11:04.891075 1413 log.go:172] (0xc000250f00) (3) Data frame handling\nI0505 00:11:04.891121 1413 log.go:172] (0xc000250f00) (3) Data frame sent\nI0505 00:11:04.891415 1413 
log.go:172] (0xc000ab9290) Data frame received for 5\nI0505 00:11:04.891432 1413 log.go:172] (0xc0004c65a0) (5) Data frame handling\nI0505 00:11:04.891444 1413 log.go:172] (0xc000ab9290) Data frame received for 3\nI0505 00:11:04.891448 1413 log.go:172] (0xc000250f00) (3) Data frame handling\nI0505 00:11:04.893103 1413 log.go:172] (0xc000ab9290) Data frame received for 1\nI0505 00:11:04.893178 1413 log.go:172] (0xc0005541e0) (1) Data frame handling\nI0505 00:11:04.893186 1413 log.go:172] (0xc0005541e0) (1) Data frame sent\nI0505 00:11:04.893333 1413 log.go:172] (0xc000ab9290) (0xc0005541e0) Stream removed, broadcasting: 1\nI0505 00:11:04.893576 1413 log.go:172] (0xc000ab9290) Go away received\nI0505 00:11:04.893631 1413 log.go:172] (0xc000ab9290) (0xc0005541e0) Stream removed, broadcasting: 1\nI0505 00:11:04.893647 1413 log.go:172] (0xc000ab9290) (0xc000250f00) Stream removed, broadcasting: 3\nI0505 00:11:04.893656 1413 log.go:172] (0xc000ab9290) (0xc0004c65a0) Stream removed, broadcasting: 5\n" May 5 00:11:04.898: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-3447.svc.cluster.local\tcanonical name = externalsvc.services-3447.svc.cluster.local.\nName:\texternalsvc.services-3447.svc.cluster.local\nAddress: 10.108.28.205\n\n" STEP: deleting ReplicationController externalsvc in namespace services-3447, will wait for the garbage collector to delete the pods May 5 00:11:04.956: INFO: Deleting ReplicationController externalsvc took: 5.728889ms May 5 00:11:05.256: INFO: Terminating ReplicationController externalsvc pods took: 300.233149ms May 5 00:11:15.399: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:11:15.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3447" for this suite. 
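The conversion exercised above replaces a NodePort service with one of type `ExternalName`, so in-cluster DNS lookups of the service name resolve to a CNAME, exactly as the nslookup output shows. A sketch of the resulting service, assuming the spec mirrors what the test produced (the `externalName` target matches the CNAME in the stdout above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service          # same service name, retyped
  namespace: services-3447
spec:
  type: ExternalName
  externalName: externalsvc.services-3447.svc.cluster.local
```

An `ExternalName` service has no cluster IP or ports; kube-dns/CoreDNS simply answers queries for `nodeport-service.services-3447.svc.cluster.local` with the configured CNAME.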
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:21.251 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":288,"completed":89,"skipped":1434,"failed":0} SS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:11:15.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:11:15.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8684" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":288,"completed":90,"skipped":1436,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:11:15.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test hostPath mode May 5 00:11:15.677: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5965" to be "Succeeded or Failed" May 5 00:11:15.687: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.315047ms May 5 00:11:17.822: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144825647s May 5 00:11:19.827: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149590728s May 5 00:11:21.831: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.153430005s STEP: Saw pod success May 5 00:11:21.831: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" May 5 00:11:21.833: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 5 00:11:21.879: INFO: Waiting for pod pod-host-path-test to disappear May 5 00:11:21.891: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:11:21.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-5965" for this suite. • [SLOW TEST:6.308 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":91,"skipped":1472,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:11:21.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 5 00:11:22.038: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8784' May 5 00:11:22.415: INFO: stderr: "" May 5 00:11:22.415: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 5 00:11:22.415: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8784' May 5 00:11:22.575: INFO: stderr: "" May 5 00:11:22.576: INFO: stdout: "update-demo-nautilus-nbtwq update-demo-nautilus-nncb5 " May 5 00:11:22.576: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nbtwq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8784' May 5 00:11:22.676: INFO: stderr: "" May 5 00:11:22.676: INFO: stdout: "" May 5 00:11:22.676: INFO: update-demo-nautilus-nbtwq is created but not running May 5 00:11:27.676: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8784' May 5 00:11:27.787: INFO: stderr: "" May 5 00:11:27.787: INFO: stdout: "update-demo-nautilus-nbtwq update-demo-nautilus-nncb5 " May 5 00:11:27.787: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nbtwq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8784' May 5 00:11:27.887: INFO: stderr: "" May 5 00:11:27.887: INFO: stdout: "true" May 5 00:11:27.887: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nbtwq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8784' May 5 00:11:27.980: INFO: stderr: "" May 5 00:11:27.980: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 5 00:11:27.980: INFO: validating pod update-demo-nautilus-nbtwq May 5 00:11:27.985: INFO: got data: { "image": "nautilus.jpg" } May 5 00:11:27.985: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 5 00:11:27.985: INFO: update-demo-nautilus-nbtwq is verified up and running May 5 00:11:27.985: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nncb5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8784' May 5 00:11:28.090: INFO: stderr: "" May 5 00:11:28.091: INFO: stdout: "true" May 5 00:11:28.091: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nncb5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8784' May 5 00:11:28.188: INFO: stderr: "" May 5 00:11:28.188: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 5 00:11:28.188: INFO: validating pod update-demo-nautilus-nncb5 May 5 00:11:28.192: INFO: got data: { "image": "nautilus.jpg" } May 5 00:11:28.192: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 5 00:11:28.192: INFO: update-demo-nautilus-nncb5 is verified up and running STEP: using delete to clean up resources May 5 00:11:28.192: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8784' May 5 00:11:28.302: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 5 00:11:28.302: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 5 00:11:28.302: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8784' May 5 00:11:28.395: INFO: stderr: "No resources found in kubectl-8784 namespace.\n" May 5 00:11:28.395: INFO: stdout: "" May 5 00:11:28.395: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8784 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 5 00:11:28.501: INFO: stderr: "" May 5 00:11:28.501: INFO: stdout: "update-demo-nautilus-nbtwq\nupdate-demo-nautilus-nncb5\n" May 5 00:11:29.001: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8784' May 5 00:11:29.181: INFO: stderr: "No resources found in kubectl-8784 namespace.\n" May 5 00:11:29.181: INFO: stdout: "" May 5 00:11:29.181: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8784 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 5 00:11:29.290: INFO: stderr: "" May 5 00:11:29.290: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:11:29.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8784" for this suite. 
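Editor's note: the Update Demo test above repeatedly runs a `kubectl get ... -o template` check until every pod reports a running container, then re-validates after deletion. The generic poll-until-condition loop that the e2e framework applies here (and in every "Waiting up to 5m0s for pod ..." line) can be sketched as follows. This is an illustrative helper, not the framework's actual Go code; `wait_for_condition` and the fake pod-phase sequence are hypothetical.

```python
import time

def wait_for_condition(check, timeout=300.0, interval=5.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll check() every `interval` seconds until it returns True
    or `timeout` seconds elapse. Returns True on success, False on timeout."""
    deadline = clock() + timeout
    while True:
        if check():
            return True
        if clock() >= deadline:
            return False
        # Never sleep past the deadline.
        sleep(min(interval, max(0.0, deadline - clock())))

# Simulated pod that reaches Succeeded after a few polls (stand-in for
# the kubectl template check in the log; not a real cluster query).
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
state = {"phase": "Pending"}

def pod_succeeded():
    state["phase"] = next(phases, state["phase"])
    return state["phase"] in ("Succeeded", "Failed")

ok = wait_for_condition(pod_succeeded, timeout=10.0, interval=0.0)
print(ok, state["phase"])  # → True Succeeded
```

The same shape explains the cadence of the log: one check per interval, with the final "satisfied condition" or "is verified up and running" line emitted when the predicate first returns true.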
• [SLOW TEST:7.400 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":288,"completed":92,"skipped":1487,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:11:29.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 5 00:11:29.869: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-c5e0c2e6-df11-4a30-a0ee-3bb856cfca69" in namespace "security-context-test-3323" to be "Succeeded or Failed" May 5 00:11:29.978: INFO: Pod "alpine-nnp-false-c5e0c2e6-df11-4a30-a0ee-3bb856cfca69": Phase="Pending", Reason="", readiness=false. 
Elapsed: 108.671753ms May 5 00:11:31.981: INFO: Pod "alpine-nnp-false-c5e0c2e6-df11-4a30-a0ee-3bb856cfca69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112033765s May 5 00:11:33.986: INFO: Pod "alpine-nnp-false-c5e0c2e6-df11-4a30-a0ee-3bb856cfca69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.116148076s May 5 00:11:33.986: INFO: Pod "alpine-nnp-false-c5e0c2e6-df11-4a30-a0ee-3bb856cfca69" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:11:33.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3323" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":93,"skipped":1506,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:11:34.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath May 5 00:11:34.106: INFO: Waiting up to 5m0s for pod "var-expansion-84e9c656-e1f1-4024-96a0-e315af4bd14a" in 
namespace "var-expansion-2538" to be "Succeeded or Failed" May 5 00:11:34.112: INFO: Pod "var-expansion-84e9c656-e1f1-4024-96a0-e315af4bd14a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.99274ms May 5 00:11:36.637: INFO: Pod "var-expansion-84e9c656-e1f1-4024-96a0-e315af4bd14a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.530898087s May 5 00:11:38.642: INFO: Pod "var-expansion-84e9c656-e1f1-4024-96a0-e315af4bd14a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.535330621s STEP: Saw pod success May 5 00:11:38.642: INFO: Pod "var-expansion-84e9c656-e1f1-4024-96a0-e315af4bd14a" satisfied condition "Succeeded or Failed" May 5 00:11:38.645: INFO: Trying to get logs from node latest-worker2 pod var-expansion-84e9c656-e1f1-4024-96a0-e315af4bd14a container dapi-container: STEP: delete the pod May 5 00:11:38.746: INFO: Waiting for pod var-expansion-84e9c656-e1f1-4024-96a0-e315af4bd14a to disappear May 5 00:11:39.020: INFO: Pod var-expansion-84e9c656-e1f1-4024-96a0-e315af4bd14a no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:11:39.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2538" for this suite. 
• [SLOW TEST:5.057 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":288,"completed":94,"skipped":1542,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:11:39.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 00:11:39.531: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 00:11:41.563: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63724234299, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724234299, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724234299, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724234299, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 00:11:44.608: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:11:44.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2887" for this suite. 
STEP: Destroying namespace "webhook-2887-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.667 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":288,"completed":95,"skipped":1550,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:11:44.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 5 00:11:44.863: INFO: Creating deployment "test-recreate-deployment" May 5 00:11:44.874: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 5 00:11:44.886: INFO: deployment 
"test-recreate-deployment" doesn't have the required revision set May 5 00:11:46.893: INFO: Waiting deployment "test-recreate-deployment" to complete May 5 00:11:46.896: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724234304, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724234304, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724234305, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724234304, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 00:11:48.900: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 5 00:11:48.907: INFO: Updating deployment test-recreate-deployment May 5 00:11:48.907: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 5 00:11:49.731: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-6727 /apis/apps/v1/namespaces/deployment-6727/deployments/test-recreate-deployment 97375980-5c27-4714-8a97-4cd01fb8f65e 1521870 2 2020-05-05 00:11:44 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-05 00:11:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-05 00:11:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0038986c8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-05 00:11:49 +0000 UTC,LastTransitionTime:2020-05-05 00:11:49 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-05-05 00:11:49 +0000 UTC,LastTransitionTime:2020-05-05 00:11:44 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 5 00:11:49.735: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-6727 /apis/apps/v1/namespaces/deployment-6727/replicasets/test-recreate-deployment-d5667d9c7 da0e88d7-6a48-44df-b99b-6db8e07a0a2e 1521867 1 2020-05-05 00:11:49 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 97375980-5c27-4714-8a97-4cd01fb8f65e 0xc0037b5cc0 0xc0037b5cc1}] [] [{kube-controller-manager Update apps/v1 2020-05-05 00:11:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97375980-5c27-4714-8a97-4cd01fb8f65e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0037b5d68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 5 00:11:49.735: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 5 00:11:49.736: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6d65b9f6d8 deployment-6727 /apis/apps/v1/namespaces/deployment-6727/replicasets/test-recreate-deployment-6d65b9f6d8 5e25d5be-70d5-4fd1-8098-15052fb4ae1f 1521856 2 2020-05-05 00:11:44 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 97375980-5c27-4714-8a97-4cd01fb8f65e 0xc0037b5b57 0xc0037b5b58}] [] [{kube-controller-manager Update apps/v1 2020-05-05 00:11:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97375980-5c27-4714-8a97-4cd01fb8f65e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string
{name: sample-pod-3,pod-template-hash: 6d65b9f6d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0037b5c38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 5 00:11:49.743: INFO: Pod "test-recreate-deployment-d5667d9c7-mc27n" is not available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-mc27n test-recreate-deployment-d5667d9c7- deployment-6727 /api/v1/namespaces/deployment-6727/pods/test-recreate-deployment-d5667d9c7-mc27n 6ffdab0c-fad2-49ae-9416-c63fcbaafb18 1521868 0 2020-05-05 00:11:49 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 da0e88d7-6a48-44df-b99b-6db8e07a0a2e 0xc0038f6360 0xc0038f6361}] [] [{kube-controller-manager Update v1 2020-05-05 00:11:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"da0e88d7-6a48-44df-b99b-6db8e07a0a2e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 00:11:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-62vb9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-62vb9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:Res
ourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-62vb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbe
Time:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 00:11:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 00:11:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 00:11:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 00:11:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-05 00:11:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:11:49.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6727" for this suite. 
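Editor's note: the RecreateDeployment test above asserts that with `strategy.type: Recreate` the old ReplicaSet is scaled to zero before any new-revision pod starts (visible in the dump: the old ReplicaSet has `Replicas:*0` while the new pod is still Pending). A manifest sketch reconstructed from the spec dump, not the test's exact generated object:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod-3
  strategy:
    type: Recreate   # delete all old pods before creating new-revision pods
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine
```

Unlike the default `RollingUpdate` strategy, `Recreate` guarantees no overlap between revisions, at the cost of downtime during the switch.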
• [SLOW TEST:5.031 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":96,"skipped":1604,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:11:49.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5874 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 5 00:11:50.374: INFO: Found 0 stateful pods, waiting for 3 May 5 00:12:00.476: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 5 00:12:00.476: INFO: Waiting 
for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 5 00:12:00.476: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 5 00:12:10.379: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 5 00:12:10.379: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 5 00:12:10.379: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 5 00:12:10.475: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 5 00:12:20.537: INFO: Updating stateful set ss2 May 5 00:12:20.625: INFO: Waiting for Pod statefulset-5874/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 5 00:12:31.171: INFO: Found 2 stateful pods, waiting for 3 May 5 00:12:41.177: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 5 00:12:41.177: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 5 00:12:41.177: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 5 00:12:41.203: INFO: Updating stateful set ss2 May 5 00:12:41.267: INFO: Waiting for Pod statefulset-5874/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 5 00:12:51.275: INFO: Waiting for Pod statefulset-5874/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 5 00:13:01.293: INFO: Updating stateful set ss2 May 5 00:13:01.320: INFO: Waiting for StatefulSet statefulset-5874/ss2 to complete update May 5 
00:13:01.320: INFO: Waiting for Pod statefulset-5874/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 5 00:13:11.329: INFO: Waiting for StatefulSet statefulset-5874/ss2 to complete update May 5 00:13:11.329: INFO: Waiting for Pod statefulset-5874/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 5 00:13:21.356: INFO: Deleting all statefulset in ns statefulset-5874 May 5 00:13:21.359: INFO: Scaling statefulset ss2 to 0 May 5 00:14:01.391: INFO: Waiting for statefulset status.replicas updated to 0 May 5 00:14:01.394: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:14:01.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5874" for this suite. 
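The canary and phased rolling updates above are driven by the StatefulSet `updateStrategy.rollingUpdate.partition` field: only pods with an ordinal greater than or equal to the partition receive the new template, which is why ss2-2 is updated to revision ss2-65c7964b94 first while ss2-0 and ss2-1 wait. A minimal sketch follows; the labels and service wiring are assumptions, as the log shows only the set name, replica count, and images:

```yaml
# Hypothetical sketch of the partitioned update used for the canary phase.
# With partition: 2 and 3 replicas, only ss2-2 (ordinal >= 2) gets the new
# template; lowering the partition step by step phases the rollout across
# ss2-1 and then ss2-0.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  replicas: 3
  serviceName: test                # matches "Creating service test" in the log
  selector:
    matchLabels:
      app: ss2                     # hypothetical label
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2                 # canary: update only ordinals >= 2
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver            # hypothetical container name
        image: docker.io/library/httpd:2.4.39-alpine   # new image from the log
```

A partition greater than the replica count applies no update at all, which is the "Not applying an update when the partition is greater than the number of replicas" step in the log.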
• [SLOW TEST:131.658 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":288,"completed":97,"skipped":1616,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:14:01.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-6833 STEP: creating service affinity-nodeport in namespace services-6833 STEP: creating replication controller affinity-nodeport in namespace services-6833 I0505 00:14:01.647868 7 runners.go:190] Created replication controller with name: 
affinity-nodeport, namespace: services-6833, replica count: 3 I0505 00:14:04.698303 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0505 00:14:07.698554 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 5 00:14:07.739: INFO: Creating new exec pod May 5 00:14:12.773: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6833 execpod-affinitypjtvb -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' May 5 00:14:13.003: INFO: stderr: "I0505 00:14:12.907267 1701 log.go:172] (0xc00044cd10) (0xc00064bd60) Create stream\nI0505 00:14:12.907334 1701 log.go:172] (0xc00044cd10) (0xc00064bd60) Stream added, broadcasting: 1\nI0505 00:14:12.916042 1701 log.go:172] (0xc00044cd10) Reply frame received for 1\nI0505 00:14:12.916107 1701 log.go:172] (0xc00044cd10) (0xc000658b40) Create stream\nI0505 00:14:12.916134 1701 log.go:172] (0xc00044cd10) (0xc000658b40) Stream added, broadcasting: 3\nI0505 00:14:12.918841 1701 log.go:172] (0xc00044cd10) Reply frame received for 3\nI0505 00:14:12.918890 1701 log.go:172] (0xc00044cd10) (0xc0006fc6e0) Create stream\nI0505 00:14:12.918900 1701 log.go:172] (0xc00044cd10) (0xc0006fc6e0) Stream added, broadcasting: 5\nI0505 00:14:12.920555 1701 log.go:172] (0xc00044cd10) Reply frame received for 5\nI0505 00:14:12.996560 1701 log.go:172] (0xc00044cd10) Data frame received for 5\nI0505 00:14:12.996601 1701 log.go:172] (0xc0006fc6e0) (5) Data frame handling\nI0505 00:14:12.996636 1701 log.go:172] (0xc0006fc6e0) (5) Data frame sent\nI0505 00:14:12.996653 1701 log.go:172] (0xc00044cd10) Data frame received for 5\nI0505 00:14:12.996663 1701 log.go:172] (0xc0006fc6e0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 
port [tcp/http] succeeded!\nI0505 00:14:12.996723 1701 log.go:172] (0xc0006fc6e0) (5) Data frame sent\nI0505 00:14:12.997093 1701 log.go:172] (0xc00044cd10) Data frame received for 5\nI0505 00:14:12.997313 1701 log.go:172] (0xc0006fc6e0) (5) Data frame handling\nI0505 00:14:12.997378 1701 log.go:172] (0xc00044cd10) Data frame received for 3\nI0505 00:14:12.997419 1701 log.go:172] (0xc000658b40) (3) Data frame handling\nI0505 00:14:12.999231 1701 log.go:172] (0xc00044cd10) Data frame received for 1\nI0505 00:14:12.999257 1701 log.go:172] (0xc00064bd60) (1) Data frame handling\nI0505 00:14:12.999281 1701 log.go:172] (0xc00064bd60) (1) Data frame sent\nI0505 00:14:12.999310 1701 log.go:172] (0xc00044cd10) (0xc00064bd60) Stream removed, broadcasting: 1\nI0505 00:14:12.999469 1701 log.go:172] (0xc00044cd10) Go away received\nI0505 00:14:12.999649 1701 log.go:172] (0xc00044cd10) (0xc00064bd60) Stream removed, broadcasting: 1\nI0505 00:14:12.999668 1701 log.go:172] (0xc00044cd10) (0xc000658b40) Stream removed, broadcasting: 3\nI0505 00:14:12.999678 1701 log.go:172] (0xc00044cd10) (0xc0006fc6e0) Stream removed, broadcasting: 5\n" May 5 00:14:13.003: INFO: stdout: "" May 5 00:14:13.004: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6833 execpod-affinitypjtvb -- /bin/sh -x -c nc -zv -t -w 2 10.109.236.142 80' May 5 00:14:13.210: INFO: stderr: "I0505 00:14:13.140876 1721 log.go:172] (0xc0009b6dc0) (0xc00092a3c0) Create stream\nI0505 00:14:13.140936 1721 log.go:172] (0xc0009b6dc0) (0xc00092a3c0) Stream added, broadcasting: 1\nI0505 00:14:13.145797 1721 log.go:172] (0xc0009b6dc0) Reply frame received for 1\nI0505 00:14:13.145844 1721 log.go:172] (0xc0009b6dc0) (0xc0006c0500) Create stream\nI0505 00:14:13.145854 1721 log.go:172] (0xc0009b6dc0) (0xc0006c0500) Stream added, broadcasting: 3\nI0505 00:14:13.147020 1721 log.go:172] (0xc0009b6dc0) Reply frame received for 3\nI0505 00:14:13.147085 
1721 log.go:172] (0xc0009b6dc0) (0xc000624140) Create stream\nI0505 00:14:13.147112 1721 log.go:172] (0xc0009b6dc0) (0xc000624140) Stream added, broadcasting: 5\nI0505 00:14:13.148290 1721 log.go:172] (0xc0009b6dc0) Reply frame received for 5\nI0505 00:14:13.201893 1721 log.go:172] (0xc0009b6dc0) Data frame received for 3\nI0505 00:14:13.201943 1721 log.go:172] (0xc0006c0500) (3) Data frame handling\nI0505 00:14:13.201973 1721 log.go:172] (0xc0009b6dc0) Data frame received for 5\nI0505 00:14:13.202007 1721 log.go:172] (0xc000624140) (5) Data frame handling\nI0505 00:14:13.202028 1721 log.go:172] (0xc000624140) (5) Data frame sent\nI0505 00:14:13.202045 1721 log.go:172] (0xc0009b6dc0) Data frame received for 5\nI0505 00:14:13.202057 1721 log.go:172] (0xc000624140) (5) Data frame handling\n+ nc -zv -t -w 2 10.109.236.142 80\nConnection to 10.109.236.142 80 port [tcp/http] succeeded!\nI0505 00:14:13.203550 1721 log.go:172] (0xc0009b6dc0) Data frame received for 1\nI0505 00:14:13.203582 1721 log.go:172] (0xc00092a3c0) (1) Data frame handling\nI0505 00:14:13.203612 1721 log.go:172] (0xc00092a3c0) (1) Data frame sent\nI0505 00:14:13.203740 1721 log.go:172] (0xc0009b6dc0) (0xc00092a3c0) Stream removed, broadcasting: 1\nI0505 00:14:13.203805 1721 log.go:172] (0xc0009b6dc0) Go away received\nI0505 00:14:13.204285 1721 log.go:172] (0xc0009b6dc0) (0xc00092a3c0) Stream removed, broadcasting: 1\nI0505 00:14:13.204313 1721 log.go:172] (0xc0009b6dc0) (0xc0006c0500) Stream removed, broadcasting: 3\nI0505 00:14:13.204327 1721 log.go:172] (0xc0009b6dc0) (0xc000624140) Stream removed, broadcasting: 5\n" May 5 00:14:13.210: INFO: stdout: "" May 5 00:14:13.210: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6833 execpod-affinitypjtvb -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31181' May 5 00:14:13.416: INFO: stderr: "I0505 00:14:13.342683 1742 log.go:172] (0xc00095e840) (0xc000648fa0) Create 
stream\nI0505 00:14:13.342745 1742 log.go:172] (0xc00095e840) (0xc000648fa0) Stream added, broadcasting: 1\nI0505 00:14:13.345378 1742 log.go:172] (0xc00095e840) Reply frame received for 1\nI0505 00:14:13.345415 1742 log.go:172] (0xc00095e840) (0xc0005dc5a0) Create stream\nI0505 00:14:13.345427 1742 log.go:172] (0xc00095e840) (0xc0005dc5a0) Stream added, broadcasting: 3\nI0505 00:14:13.346385 1742 log.go:172] (0xc00095e840) Reply frame received for 3\nI0505 00:14:13.346434 1742 log.go:172] (0xc00095e840) (0xc000428500) Create stream\nI0505 00:14:13.346454 1742 log.go:172] (0xc00095e840) (0xc000428500) Stream added, broadcasting: 5\nI0505 00:14:13.347629 1742 log.go:172] (0xc00095e840) Reply frame received for 5\nI0505 00:14:13.408980 1742 log.go:172] (0xc00095e840) Data frame received for 3\nI0505 00:14:13.409004 1742 log.go:172] (0xc0005dc5a0) (3) Data frame handling\nI0505 00:14:13.409042 1742 log.go:172] (0xc00095e840) Data frame received for 5\nI0505 00:14:13.409066 1742 log.go:172] (0xc000428500) (5) Data frame handling\nI0505 00:14:13.409095 1742 log.go:172] (0xc000428500) (5) Data frame sent\nI0505 00:14:13.409331 1742 log.go:172] (0xc00095e840) Data frame received for 5\n+ nc -zv -t -w 2 172.17.0.13 31181\nConnection to 172.17.0.13 31181 port [tcp/31181] succeeded!\nI0505 00:14:13.409360 1742 log.go:172] (0xc000428500) (5) Data frame handling\nI0505 00:14:13.410991 1742 log.go:172] (0xc00095e840) Data frame received for 1\nI0505 00:14:13.411021 1742 log.go:172] (0xc000648fa0) (1) Data frame handling\nI0505 00:14:13.411055 1742 log.go:172] (0xc000648fa0) (1) Data frame sent\nI0505 00:14:13.411082 1742 log.go:172] (0xc00095e840) (0xc000648fa0) Stream removed, broadcasting: 1\nI0505 00:14:13.411101 1742 log.go:172] (0xc00095e840) Go away received\nI0505 00:14:13.411580 1742 log.go:172] (0xc00095e840) (0xc000648fa0) Stream removed, broadcasting: 1\nI0505 00:14:13.411602 1742 log.go:172] (0xc00095e840) (0xc0005dc5a0) Stream removed, broadcasting: 3\nI0505 
00:14:13.411616 1742 log.go:172] (0xc00095e840) (0xc000428500) Stream removed, broadcasting: 5\n" May 5 00:14:13.417: INFO: stdout: "" May 5 00:14:13.417: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6833 execpod-affinitypjtvb -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31181' May 5 00:14:13.625: INFO: stderr: "I0505 00:14:13.547386 1764 log.go:172] (0xc00003ac60) (0xc000424e60) Create stream\nI0505 00:14:13.547474 1764 log.go:172] (0xc00003ac60) (0xc000424e60) Stream added, broadcasting: 1\nI0505 00:14:13.550909 1764 log.go:172] (0xc00003ac60) Reply frame received for 1\nI0505 00:14:13.550959 1764 log.go:172] (0xc00003ac60) (0xc000238140) Create stream\nI0505 00:14:13.550978 1764 log.go:172] (0xc00003ac60) (0xc000238140) Stream added, broadcasting: 3\nI0505 00:14:13.551979 1764 log.go:172] (0xc00003ac60) Reply frame received for 3\nI0505 00:14:13.552033 1764 log.go:172] (0xc00003ac60) (0xc0006c4dc0) Create stream\nI0505 00:14:13.552054 1764 log.go:172] (0xc00003ac60) (0xc0006c4dc0) Stream added, broadcasting: 5\nI0505 00:14:13.552896 1764 log.go:172] (0xc00003ac60) Reply frame received for 5\nI0505 00:14:13.617626 1764 log.go:172] (0xc00003ac60) Data frame received for 5\nI0505 00:14:13.617660 1764 log.go:172] (0xc0006c4dc0) (5) Data frame handling\nI0505 00:14:13.617686 1764 log.go:172] (0xc0006c4dc0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 31181\nConnection to 172.17.0.12 31181 port [tcp/31181] succeeded!\nI0505 00:14:13.618272 1764 log.go:172] (0xc00003ac60) Data frame received for 3\nI0505 00:14:13.618291 1764 log.go:172] (0xc000238140) (3) Data frame handling\nI0505 00:14:13.618317 1764 log.go:172] (0xc00003ac60) Data frame received for 5\nI0505 00:14:13.618339 1764 log.go:172] (0xc0006c4dc0) (5) Data frame handling\nI0505 00:14:13.620260 1764 log.go:172] (0xc00003ac60) Data frame received for 1\nI0505 00:14:13.620279 1764 log.go:172] (0xc000424e60) (1) Data frame 
handling\nI0505 00:14:13.620290 1764 log.go:172] (0xc000424e60) (1) Data frame sent\nI0505 00:14:13.620305 1764 log.go:172] (0xc00003ac60) (0xc000424e60) Stream removed, broadcasting: 1\nI0505 00:14:13.620370 1764 log.go:172] (0xc00003ac60) Go away received\nI0505 00:14:13.620669 1764 log.go:172] (0xc00003ac60) (0xc000424e60) Stream removed, broadcasting: 1\nI0505 00:14:13.620688 1764 log.go:172] (0xc00003ac60) (0xc000238140) Stream removed, broadcasting: 3\nI0505 00:14:13.620698 1764 log.go:172] (0xc00003ac60) (0xc0006c4dc0) Stream removed, broadcasting: 5\n" May 5 00:14:13.625: INFO: stdout: "" May 5 00:14:13.625: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6833 execpod-affinitypjtvb -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31181/ ; done' May 5 00:14:13.919: INFO: stderr: "I0505 00:14:13.765314 1787 log.go:172] (0xc00003ad10) (0xc000556640) Create stream\nI0505 00:14:13.765388 1787 log.go:172] (0xc00003ad10) (0xc000556640) Stream added, broadcasting: 1\nI0505 00:14:13.767843 1787 log.go:172] (0xc00003ad10) Reply frame received for 1\nI0505 00:14:13.767891 1787 log.go:172] (0xc00003ad10) (0xc000557a40) Create stream\nI0505 00:14:13.767911 1787 log.go:172] (0xc00003ad10) (0xc000557a40) Stream added, broadcasting: 3\nI0505 00:14:13.768680 1787 log.go:172] (0xc00003ad10) Reply frame received for 3\nI0505 00:14:13.768726 1787 log.go:172] (0xc00003ad10) (0xc000557cc0) Create stream\nI0505 00:14:13.768738 1787 log.go:172] (0xc00003ad10) (0xc000557cc0) Stream added, broadcasting: 5\nI0505 00:14:13.769627 1787 log.go:172] (0xc00003ad10) Reply frame received for 5\nI0505 00:14:13.833887 1787 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:14:13.833927 1787 log.go:172] (0xc000557cc0) (5) Data frame handling\nI0505 00:14:13.833944 1787 log.go:172] (0xc000557cc0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://172.17.0.13:31181/\nI0505 00:14:13.833966 1787 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:14:13.833978 1787 log.go:172] (0xc000557a40) (3) Data frame handling\nI0505 00:14:13.834000 1787 log.go:172] (0xc000557a40) (3) Data frame sent\nI0505 00:14:13.841747 1787 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:14:13.841766 1787 log.go:172] (0xc000557a40) (3) Data frame handling\nI0505 00:14:13.841776 1787 log.go:172] (0xc000557a40) (3) Data frame sent\nI0505 00:14:13.842487 1787 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:14:13.842508 1787 log.go:172] (0xc000557a40) (3) Data frame handling\nI0505 00:14:13.842515 1787 log.go:172] (0xc000557a40) (3) Data frame sent\nI0505 00:14:13.842524 1787 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:14:13.842531 1787 log.go:172] (0xc000557cc0) (5) Data frame handling\nI0505 00:14:13.842537 1787 log.go:172] (0xc000557cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31181/\nI0505 00:14:13.848459 1787 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:14:13.848482 1787 log.go:172] (0xc000557a40) (3) Data frame handling\nI0505 00:14:13.848498 1787 log.go:172] (0xc000557a40) (3) Data frame sent\nI0505 00:14:13.849025 1787 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:14:13.849038 1787 log.go:172] (0xc000557cc0) (5) Data frame handling\nI0505 00:14:13.849045 1787 log.go:172] (0xc000557cc0) (5) Data frame sent\nI0505 00:14:13.849049 1787 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:14:13.849054 1787 log.go:172] (0xc000557cc0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31181/\nI0505 00:14:13.849065 1787 log.go:172] (0xc000557cc0) (5) Data frame sent\nI0505 00:14:13.849072 1787 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:14:13.849080 1787 log.go:172] (0xc000557a40) (3) Data frame handling\nI0505 
00:14:13.849087 1787 log.go:172] (0xc000557a40) (3) Data frame sent\nI0505 00:14:13.854111 1787 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:14:13.854130 1787 log.go:172] (0xc000557a40) (3) Data frame handling\nI0505 00:14:13.854140 1787 log.go:172] (0xc000557a40) (3) Data frame sent\nI0505 00:14:13.854803 1787 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:14:13.854826 1787 log.go:172] (0xc000557cc0) (5) Data frame handling\nI0505 00:14:13.854839 1787 log.go:172] (0xc000557cc0) (5) Data frame sent\nI0505 00:14:13.854846 1787 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:14:13.854851 1787 log.go:172] (0xc000557cc0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31181/\nI0505 00:14:13.854898 1787 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:14:13.854953 1787 log.go:172] (0xc000557a40) (3) Data frame handling\nI0505 00:14:13.854976 1787 log.go:172] (0xc000557a40) (3) Data frame sent\nI0505 00:14:13.855013 1787 log.go:172] (0xc000557cc0) (5) Data frame sent\nI0505 00:14:13.858869 1787 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:14:13.858886 1787 log.go:172] (0xc000557a40) (3) Data frame handling\nI0505 00:14:13.858898 1787 log.go:172] (0xc000557a40) (3) Data frame sent\nI0505 00:14:13.859269 1787 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:14:13.859292 1787 log.go:172] (0xc000557cc0) (5) Data frame handling\nI0505 00:14:13.859316 1787 log.go:172] (0xc000557cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31181/\nI0505 00:14:13.859384 1787 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:14:13.859399 1787 log.go:172] (0xc000557a40) (3) Data frame handling\nI0505 00:14:13.859406 1787 log.go:172] (0xc000557a40) (3) Data frame sent\nI0505 00:14:13.862983 1787 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:14:13.862999 1787 log.go:172] (0xc000557a40) (3) Data frame 
handling\nI0505 00:14:13.863014 1787 log.go:172] (0xc000557a40) (3) Data frame sent\nI0505 00:14:13.863278 1787 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:14:13.863298 1787 log.go:172] (0xc000557cc0) (5) Data frame handling\nI0505 00:14:13.863309 1787 log.go:172] (0xc000557cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31181/\nI0505 00:14:13.863353 1787 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:14:13.863366 1787 log.go:172] (0xc000557a40) (3) Data frame handling\nI0505 00:14:13.863377 1787 log.go:172] (0xc000557a40) (3) Data frame sent\nI0505 00:14:13.867228 1787 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:14:13.867240 1787 log.go:172] (0xc000557a40) (3) Data frame handling\nI0505 00:14:13.867247 1787 log.go:172] (0xc000557a40) (3) Data frame sent\nI0505 00:14:13.867566 1787 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:14:13.867589 1787 log.go:172] (0xc000557cc0) (5) Data frame handling\nI0505 00:14:13.867625 1787 log.go:172] (0xc000557cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31181/\nI0505 00:14:13.867715 1787 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:14:13.867736 1787 log.go:172] (0xc000557a40) (3) Data frame handling\nI0505 00:14:13.867755 1787 log.go:172] (0xc000557a40) (3) Data frame sent\nI0505 00:14:13.871084 1787 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:14:13.871102 1787 log.go:172] (0xc000557a40) (3) Data frame handling\nI0505 00:14:13.871118 1787 log.go:172] (0xc000557a40) (3) Data frame sent\nI0505 00:14:13.871432 1787 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:14:13.871464 1787 log.go:172] (0xc000557a40) (3) Data frame handling\nI0505 00:14:13.871478 1787 log.go:172] (0xc000557a40) (3) Data frame sent\nI0505 00:14:13.871492 1787 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:14:13.871500 1787 log.go:172] 
(0xc000557cc0) (5) Data frame handling\nI0505 00:14:13.871510 1787 log.go:172] (0xc000557cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31181/\nI0505 00:14:13.874778 1787 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:14:13.874806 1787 log.go:172] (0xc000557a40) (3) Data frame handling\nI0505 00:14:13.874834 1787 log.go:172] (0xc000557a40) (3) Data frame sent\nI0505 00:14:13.875150 1787 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:14:13.875179 1787 log.go:172] (0xc000557cc0) (5) Data frame handling\nI0505 00:14:13.875206 1787 log.go:172] (0xc000557cc0) (5) Data frame sent\nI0505 00:14:13.875227 1787 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:14:13.875249 1787 log.go:172] (0xc000557cc0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31181/\nI0505 00:14:13.875281 1787 log.go:172] (0xc000557cc0) (5) Data frame sent\nI0505 00:14:13.875317 1787 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:14:13.875344 1787 log.go:172] (0xc000557a40) (3) Data frame handling\nI0505 00:14:13.875355 1787 log.go:172] (0xc000557a40) (3) Data frame sent\nI0505 00:14:13.878653 1787 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:14:13.878677 1787 log.go:172] (0xc000557a40) (3) Data frame handling\nI0505 00:14:13.878727 1787 log.go:172] (0xc000557a40) (3) Data frame sent\nI0505 00:14:13.878828 1787 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:14:13.878849 1787 log.go:172] (0xc000557cc0) (5) Data frame handling\nI0505 00:14:13.878876 1787 log.go:172] (0xc000557cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31181/\nI0505 00:14:13.878899 1787 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:14:13.878916 1787 log.go:172] (0xc000557a40) (3) Data frame handling\nI0505 00:14:13.878926 1787 log.go:172] (0xc000557a40) (3) Data frame sent\nI0505 00:14:13.882476 1787 
log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:14:13.882512 1787 log.go:172] (0xc000557a40) (3) Data frame handling\nI0505 00:14:13.882542 1787 log.go:172] (0xc000557a40) (3) Data frame sent\nI0505 00:14:13.883488 1787 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:14:13.883517 1787 log.go:172] (0xc000557a40) (3) Data frame handling\nI0505 00:14:13.883534 1787 log.go:172] (0xc000557a40) (3) Data frame sent\nI0505 00:14:13.883563 1787 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:14:13.883581 1787 log.go:172] (0xc000557cc0) (5) Data frame handling\nI0505 00:14:13.883597 1787 log.go:172] (0xc000557cc0) (5) Data frame sent\nI0505 00:14:13.883621 1787 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:14:13.883639 1787 log.go:172] (0xc000557cc0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31181/\nI0505 00:14:13.883684 1787 log.go:172] (0xc000557cc0) (5) Data frame sent\nI0505 00:14:13.887110 1787 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:14:13.887137 1787 log.go:172] (0xc000557a40) (3) Data frame handling\nI0505 00:14:13.887160 1787 log.go:172] (0xc000557a40) (3) Data frame sent\nI0505 00:14:13.887438 1787 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:14:13.887454 1787 log.go:172] (0xc000557cc0) (5) Data frame handling\nI0505 00:14:13.887465 1787 log.go:172] (0xc000557cc0) (5) Data frame sent\nI0505 00:14:13.887473 1787 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:14:13.887479 1787 log.go:172] (0xc000557cc0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31181/\nI0505 00:14:13.887491 1787 log.go:172] (0xc000557cc0) (5) Data frame sent\nI0505 00:14:13.887563 1787 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:14:13.887583 1787 log.go:172] (0xc000557a40) (3) Data frame handling\nI0505 00:14:13.887601 1787 log.go:172] (0xc000557a40) (3) Data frame sent\nI0505 
I0505 00:14:13.891489 - 00:14:13.913289 1787 log.go:172] (kubectl exec stream: repeated "Data frame received / Data frame handling / Data frame sent" debug frames for streams 3 and 5 elided; the exec pod looped the following)
+ echo
+ curl -q -s --connect-timeout 2 http://172.17.0.13:31181/
I0505 00:14:13.914913 1787 log.go:172] (0xc00003ad10) Go away received
I0505 00:14:13.915171 1787 log.go:172] (0xc00003ad10) (0xc000556640) Stream removed, broadcasting: 1
I0505 00:14:13.915194 1787 log.go:172] (0xc00003ad10) (0xc000557a40) Stream removed, broadcasting: 3
I0505 00:14:13.915203 1787 log.go:172] (0xc00003ad10) (0xc000557cc0) Stream removed, broadcasting: 5
May 5 00:14:13.920: INFO: stdout: "affinity-nodeport-v68gk" (16 identical lines)
May 5 00:14:13.920: INFO: Received response from host:
May 5 00:14:13.920: INFO: Received response from host: affinity-nodeport-v68gk (16 identical lines)
May 5 00:14:13.920: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport in namespace services-6833, will wait for the garbage collector to delete the pods
May 5 00:14:14.031: INFO: Deleting ReplicationController affinity-nodeport took: 6.769377ms
May 5 00:14:14.333: INFO: Terminating ReplicationController affinity-nodeport pods took: 302.079583ms
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 5 00:14:25.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6833" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:23.591 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":98,"skipped":1636,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:14:25.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 5 00:14:33.234: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 5 00:14:33.238: INFO: Pod pod-with-poststart-exec-hook still exists May 5 00:14:35.238: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 5 00:14:35.242: INFO: Pod pod-with-poststart-exec-hook still exists May 5 00:14:37.238: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 5 00:14:37.242: INFO: Pod pod-with-poststart-exec-hook still exists May 5 00:14:39.238: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 5 00:14:39.242: INFO: Pod pod-with-poststart-exec-hook still exists May 5 00:14:41.238: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 5 00:14:41.243: INFO: Pod pod-with-poststart-exec-hook still exists May 5 00:14:43.238: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 5 00:14:43.243: INFO: Pod pod-with-poststart-exec-hook still exists May 5 00:14:45.238: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 5 00:14:45.242: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:14:45.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8990" for this suite. 
• [SLOW TEST:20.241 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":288,"completed":99,"skipped":1656,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:14:45.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 5 00:14:45.336: INFO: Waiting up to 5m0s for pod "downwardapi-volume-56e7271d-5def-4e25-978b-79525f6c3c46" in namespace "downward-api-9688" to be "Succeeded or Failed" May 5 00:14:45.352: INFO: Pod "downwardapi-volume-56e7271d-5def-4e25-978b-79525f6c3c46": 
Phase="Pending", Reason="", readiness=false. Elapsed: 16.171955ms May 5 00:14:47.356: INFO: Pod "downwardapi-volume-56e7271d-5def-4e25-978b-79525f6c3c46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02003768s May 5 00:14:50.578: INFO: Pod "downwardapi-volume-56e7271d-5def-4e25-978b-79525f6c3c46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 5.24197369s STEP: Saw pod success May 5 00:14:50.578: INFO: Pod "downwardapi-volume-56e7271d-5def-4e25-978b-79525f6c3c46" satisfied condition "Succeeded or Failed" May 5 00:14:50.815: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-56e7271d-5def-4e25-978b-79525f6c3c46 container client-container: STEP: delete the pod May 5 00:14:51.006: INFO: Waiting for pod downwardapi-volume-56e7271d-5def-4e25-978b-79525f6c3c46 to disappear May 5 00:14:51.018: INFO: Pod downwardapi-volume-56e7271d-5def-4e25-978b-79525f6c3c46 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:14:51.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9688" for this suite. 
• [SLOW TEST:5.824 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":100,"skipped":1673,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:14:51.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-108090d0-2ec8-40f4-9c66-2b618aa1baf6 STEP: Creating a pod to test consume secrets May 5 00:14:51.195: INFO: Waiting up to 5m0s for pod "pod-secrets-5ad6d872-dc73-4a87-a186-6087365eb436" in namespace "secrets-5722" to be "Succeeded or Failed" May 5 00:14:51.198: INFO: Pod "pod-secrets-5ad6d872-dc73-4a87-a186-6087365eb436": Phase="Pending", Reason="", readiness=false. Elapsed: 3.370983ms May 5 00:14:53.243: INFO: Pod "pod-secrets-5ad6d872-dc73-4a87-a186-6087365eb436": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.047943336s May 5 00:14:55.246: INFO: Pod "pod-secrets-5ad6d872-dc73-4a87-a186-6087365eb436": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051480948s STEP: Saw pod success May 5 00:14:55.246: INFO: Pod "pod-secrets-5ad6d872-dc73-4a87-a186-6087365eb436" satisfied condition "Succeeded or Failed" May 5 00:14:55.249: INFO: Trying to get logs from node latest-worker pod pod-secrets-5ad6d872-dc73-4a87-a186-6087365eb436 container secret-volume-test: STEP: delete the pod May 5 00:14:55.352: INFO: Waiting for pod pod-secrets-5ad6d872-dc73-4a87-a186-6087365eb436 to disappear May 5 00:14:55.370: INFO: Pod pod-secrets-5ad6d872-dc73-4a87-a186-6087365eb436 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:14:55.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5722" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":101,"skipped":1713,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:14:55.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 00:14:57.179: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 00:14:59.268: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724234497, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724234497, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724234497, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724234497, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 00:15:02.333: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 5 00:15:02.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be 
denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:15:03.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6511" for this suite. STEP: Destroying namespace "webhook-6511-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.307 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":288,"completed":102,"skipped":1747,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:15:03.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:15:09.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6617" for this suite. STEP: Destroying namespace "nsdeletetest-2232" for this suite. May 5 00:15:10.000: INFO: Namespace nsdeletetest-2232 was already deleted STEP: Destroying namespace "nsdeletetest-1742" for this suite. 
• [SLOW TEST:6.321 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":288,"completed":103,"skipped":1769,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:15:10.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-a74fdae6-66f3-4eb4-b5e6-b85a678f120c STEP: Creating a pod to test consume secrets May 5 00:15:10.106: INFO: Waiting up to 5m0s for pod "pod-secrets-b46031b1-25fd-41bb-bd84-1384e36c989e" in namespace "secrets-364" to be "Succeeded or Failed" May 5 00:15:10.115: INFO: Pod "pod-secrets-b46031b1-25fd-41bb-bd84-1384e36c989e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.805707ms May 5 00:15:12.177: INFO: Pod "pod-secrets-b46031b1-25fd-41bb-bd84-1384e36c989e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.071357145s May 5 00:15:14.182: INFO: Pod "pod-secrets-b46031b1-25fd-41bb-bd84-1384e36c989e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075996155s STEP: Saw pod success May 5 00:15:14.182: INFO: Pod "pod-secrets-b46031b1-25fd-41bb-bd84-1384e36c989e" satisfied condition "Succeeded or Failed" May 5 00:15:14.185: INFO: Trying to get logs from node latest-worker pod pod-secrets-b46031b1-25fd-41bb-bd84-1384e36c989e container secret-volume-test: STEP: delete the pod May 5 00:15:14.358: INFO: Waiting for pod pod-secrets-b46031b1-25fd-41bb-bd84-1384e36c989e to disappear May 5 00:15:14.407: INFO: Pod pod-secrets-b46031b1-25fd-41bb-bd84-1384e36c989e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:15:14.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-364" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":104,"skipped":1780,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:15:14.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-8d230dc8-a1b7-43b5-a666-21b1453bd7f5 STEP: Creating a pod to test consume configMaps May 5 00:15:14.622: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-22a97bb8-58ac-4eed-86a8-ed1374bb6d91" in namespace "projected-8831" to be "Succeeded or Failed" May 5 00:15:14.668: INFO: Pod "pod-projected-configmaps-22a97bb8-58ac-4eed-86a8-ed1374bb6d91": Phase="Pending", Reason="", readiness=false. Elapsed: 45.548464ms May 5 00:15:16.672: INFO: Pod "pod-projected-configmaps-22a97bb8-58ac-4eed-86a8-ed1374bb6d91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049191484s May 5 00:15:18.676: INFO: Pod "pod-projected-configmaps-22a97bb8-58ac-4eed-86a8-ed1374bb6d91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053920046s STEP: Saw pod success May 5 00:15:18.677: INFO: Pod "pod-projected-configmaps-22a97bb8-58ac-4eed-86a8-ed1374bb6d91" satisfied condition "Succeeded or Failed" May 5 00:15:18.680: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-22a97bb8-58ac-4eed-86a8-ed1374bb6d91 container projected-configmap-volume-test: STEP: delete the pod May 5 00:15:18.725: INFO: Waiting for pod pod-projected-configmaps-22a97bb8-58ac-4eed-86a8-ed1374bb6d91 to disappear May 5 00:15:18.731: INFO: Pod pod-projected-configmaps-22a97bb8-58ac-4eed-86a8-ed1374bb6d91 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:15:18.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8831" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":105,"skipped":1800,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:15:18.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-281.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-281.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-281.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-281.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-281.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-281.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-281.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-281.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-281.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.test-service-2.dns-281.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-281.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-281.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-281.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 105.11.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.11.105_udp@PTR;check="$$(dig +tcp +noall +answer +search 105.11.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.11.105_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-281.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-281.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-281.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-281.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-281.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-281.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-281.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-281.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-281.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-281.svc.cluster.local;check="$$(dig 
+tcp +noall +answer +search _http._tcp.test-service-2.dns-281.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-281.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-281.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 105.11.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.11.105_udp@PTR;check="$$(dig +tcp +noall +answer +search 105.11.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.11.105_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 5 00:15:24.973: INFO: Unable to read wheezy_udp@dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:24.976: INFO: Unable to read wheezy_tcp@dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:24.979: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:24.982: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find 
the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:25.000: INFO: Unable to read jessie_udp@dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:25.002: INFO: Unable to read jessie_tcp@dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:25.005: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:25.008: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:25.026: INFO: Lookups using dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8 failed for: [wheezy_udp@dns-test-service.dns-281.svc.cluster.local wheezy_tcp@dns-test-service.dns-281.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-281.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-281.svc.cluster.local jessie_udp@dns-test-service.dns-281.svc.cluster.local jessie_tcp@dns-test-service.dns-281.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-281.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-281.svc.cluster.local] May 5 00:15:30.032: INFO: Unable to read wheezy_udp@dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods 
dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:30.036: INFO: Unable to read wheezy_tcp@dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:30.039: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:30.042: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:30.063: INFO: Unable to read jessie_udp@dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:30.066: INFO: Unable to read jessie_tcp@dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:30.069: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:30.073: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:30.092: INFO: Lookups using 
dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8 failed for: [wheezy_udp@dns-test-service.dns-281.svc.cluster.local wheezy_tcp@dns-test-service.dns-281.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-281.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-281.svc.cluster.local jessie_udp@dns-test-service.dns-281.svc.cluster.local jessie_tcp@dns-test-service.dns-281.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-281.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-281.svc.cluster.local] May 5 00:15:35.032: INFO: Unable to read wheezy_udp@dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:35.035: INFO: Unable to read wheezy_tcp@dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:35.039: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:35.042: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:35.064: INFO: Unable to read jessie_udp@dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:35.067: INFO: Unable to read jessie_tcp@dns-test-service.dns-281.svc.cluster.local from pod 
dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:35.069: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:35.072: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:35.092: INFO: Lookups using dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8 failed for: [wheezy_udp@dns-test-service.dns-281.svc.cluster.local wheezy_tcp@dns-test-service.dns-281.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-281.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-281.svc.cluster.local jessie_udp@dns-test-service.dns-281.svc.cluster.local jessie_tcp@dns-test-service.dns-281.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-281.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-281.svc.cluster.local] May 5 00:15:40.286: INFO: Unable to read wheezy_udp@dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:40.289: INFO: Unable to read wheezy_tcp@dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:40.350: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: 
the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:40.380: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:40.464: INFO: Unable to read jessie_udp@dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:40.467: INFO: Unable to read jessie_tcp@dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:40.470: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:40.473: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:40.489: INFO: Lookups using dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8 failed for: [wheezy_udp@dns-test-service.dns-281.svc.cluster.local wheezy_tcp@dns-test-service.dns-281.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-281.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-281.svc.cluster.local jessie_udp@dns-test-service.dns-281.svc.cluster.local jessie_tcp@dns-test-service.dns-281.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-281.svc.cluster.local 
jessie_tcp@_http._tcp.dns-test-service.dns-281.svc.cluster.local] May 5 00:15:45.032: INFO: Unable to read wheezy_udp@dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:45.036: INFO: Unable to read wheezy_tcp@dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:45.039: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:45.043: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:45.065: INFO: Unable to read jessie_udp@dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:45.068: INFO: Unable to read jessie_tcp@dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:45.071: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:45.074: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:45.095: INFO: Lookups using dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8 failed for: [wheezy_udp@dns-test-service.dns-281.svc.cluster.local wheezy_tcp@dns-test-service.dns-281.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-281.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-281.svc.cluster.local jessie_udp@dns-test-service.dns-281.svc.cluster.local jessie_tcp@dns-test-service.dns-281.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-281.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-281.svc.cluster.local] May 5 00:15:50.032: INFO: Unable to read wheezy_udp@dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:50.036: INFO: Unable to read wheezy_tcp@dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:50.040: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:50.044: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:50.067: INFO: Unable to read jessie_udp@dns-test-service.dns-281.svc.cluster.local 
from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:50.070: INFO: Unable to read jessie_tcp@dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:50.074: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:50.077: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-281.svc.cluster.local from pod dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8: the server could not find the requested resource (get pods dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8) May 5 00:15:50.097: INFO: Lookups using dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8 failed for: [wheezy_udp@dns-test-service.dns-281.svc.cluster.local wheezy_tcp@dns-test-service.dns-281.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-281.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-281.svc.cluster.local jessie_udp@dns-test-service.dns-281.svc.cluster.local jessie_tcp@dns-test-service.dns-281.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-281.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-281.svc.cluster.local] May 5 00:15:55.096: INFO: DNS probes using dns-281/dns-test-8960e865-4a13-4fcf-911c-3c7dc53ff5d8 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:15:55.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "dns-281" for this suite. • [SLOW TEST:37.178 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":288,"completed":106,"skipped":1825,"failed":0} S ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:15:55.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-748 STEP: creating service affinity-clusterip in namespace services-748 STEP: creating replication controller affinity-clusterip in namespace services-748 I0505 00:15:56.102845 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-748, replica count: 3 I0505 00:15:59.153226 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0505 00:16:02.153473 7 runners.go:190] 
affinity-clusterip Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0505 00:16:05.153722 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 5 00:16:05.160: INFO: Creating new exec pod May 5 00:16:10.178: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-748 execpod-affinityxps9n -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' May 5 00:16:10.397: INFO: stderr: "I0505 00:16:10.306139 1809 log.go:172] (0xc00003adc0) (0xc00030ba40) Create stream\nI0505 00:16:10.306201 1809 log.go:172] (0xc00003adc0) (0xc00030ba40) Stream added, broadcasting: 1\nI0505 00:16:10.308550 1809 log.go:172] (0xc00003adc0) Reply frame received for 1\nI0505 00:16:10.308606 1809 log.go:172] (0xc00003adc0) (0xc000138e60) Create stream\nI0505 00:16:10.308631 1809 log.go:172] (0xc00003adc0) (0xc000138e60) Stream added, broadcasting: 3\nI0505 00:16:10.309853 1809 log.go:172] (0xc00003adc0) Reply frame received for 3\nI0505 00:16:10.309885 1809 log.go:172] (0xc00003adc0) (0xc0004aea00) Create stream\nI0505 00:16:10.309896 1809 log.go:172] (0xc00003adc0) (0xc0004aea00) Stream added, broadcasting: 5\nI0505 00:16:10.310897 1809 log.go:172] (0xc00003adc0) Reply frame received for 5\nI0505 00:16:10.387702 1809 log.go:172] (0xc00003adc0) Data frame received for 5\nI0505 00:16:10.387724 1809 log.go:172] (0xc0004aea00) (5) Data frame handling\nI0505 00:16:10.387737 1809 log.go:172] (0xc0004aea00) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0505 00:16:10.388580 1809 log.go:172] (0xc00003adc0) Data frame received for 5\nI0505 00:16:10.388611 1809 log.go:172] (0xc0004aea00) (5) Data frame handling\nI0505 00:16:10.388651 1809 log.go:172] (0xc0004aea00) (5) Data frame sent\nConnection to affinity-clusterip 80 port [tcp/http] 
succeeded!\nI0505 00:16:10.389253 1809 log.go:172] (0xc00003adc0) Data frame received for 5\nI0505 00:16:10.389274 1809 log.go:172] (0xc0004aea00) (5) Data frame handling\nI0505 00:16:10.389298 1809 log.go:172] (0xc00003adc0) Data frame received for 3\nI0505 00:16:10.389304 1809 log.go:172] (0xc000138e60) (3) Data frame handling\nI0505 00:16:10.391383 1809 log.go:172] (0xc00003adc0) Data frame received for 1\nI0505 00:16:10.391408 1809 log.go:172] (0xc00030ba40) (1) Data frame handling\nI0505 00:16:10.391429 1809 log.go:172] (0xc00030ba40) (1) Data frame sent\nI0505 00:16:10.391463 1809 log.go:172] (0xc00003adc0) (0xc00030ba40) Stream removed, broadcasting: 1\nI0505 00:16:10.391484 1809 log.go:172] (0xc00003adc0) Go away received\nI0505 00:16:10.391969 1809 log.go:172] (0xc00003adc0) (0xc00030ba40) Stream removed, broadcasting: 1\nI0505 00:16:10.391994 1809 log.go:172] (0xc00003adc0) (0xc000138e60) Stream removed, broadcasting: 3\nI0505 00:16:10.392006 1809 log.go:172] (0xc00003adc0) (0xc0004aea00) Stream removed, broadcasting: 5\n" May 5 00:16:10.397: INFO: stdout: "" May 5 00:16:10.398: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-748 execpod-affinityxps9n -- /bin/sh -x -c nc -zv -t -w 2 10.96.33.3 80' May 5 00:16:10.603: INFO: stderr: "I0505 00:16:10.530476 1831 log.go:172] (0xc000af53f0) (0xc000611cc0) Create stream\nI0505 00:16:10.530541 1831 log.go:172] (0xc000af53f0) (0xc000611cc0) Stream added, broadcasting: 1\nI0505 00:16:10.534306 1831 log.go:172] (0xc000af53f0) Reply frame received for 1\nI0505 00:16:10.534356 1831 log.go:172] (0xc000af53f0) (0xc0006ac640) Create stream\nI0505 00:16:10.534367 1831 log.go:172] (0xc000af53f0) (0xc0006ac640) Stream added, broadcasting: 3\nI0505 00:16:10.535193 1831 log.go:172] (0xc000af53f0) Reply frame received for 3\nI0505 00:16:10.535222 1831 log.go:172] (0xc000af53f0) (0xc00060ce60) Create stream\nI0505 00:16:10.535232 1831 
log.go:172] (0xc000af53f0) (0xc00060ce60) Stream added, broadcasting: 5\nI0505 00:16:10.535980 1831 log.go:172] (0xc000af53f0) Reply frame received for 5\nI0505 00:16:10.597085 1831 log.go:172] (0xc000af53f0) Data frame received for 3\nI0505 00:16:10.597234 1831 log.go:172] (0xc000af53f0) Data frame received for 5\nI0505 00:16:10.597257 1831 log.go:172] (0xc00060ce60) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.33.3 80\nConnection to 10.96.33.3 80 port [tcp/http] succeeded!\nI0505 00:16:10.597278 1831 log.go:172] (0xc0006ac640) (3) Data frame handling\nI0505 00:16:10.597311 1831 log.go:172] (0xc00060ce60) (5) Data frame sent\nI0505 00:16:10.597337 1831 log.go:172] (0xc000af53f0) Data frame received for 5\nI0505 00:16:10.597351 1831 log.go:172] (0xc00060ce60) (5) Data frame handling\nI0505 00:16:10.598325 1831 log.go:172] (0xc000af53f0) Data frame received for 1\nI0505 00:16:10.598342 1831 log.go:172] (0xc000611cc0) (1) Data frame handling\nI0505 00:16:10.598353 1831 log.go:172] (0xc000611cc0) (1) Data frame sent\nI0505 00:16:10.598365 1831 log.go:172] (0xc000af53f0) (0xc000611cc0) Stream removed, broadcasting: 1\nI0505 00:16:10.598376 1831 log.go:172] (0xc000af53f0) Go away received\nI0505 00:16:10.598734 1831 log.go:172] (0xc000af53f0) (0xc000611cc0) Stream removed, broadcasting: 1\nI0505 00:16:10.598751 1831 log.go:172] (0xc000af53f0) (0xc0006ac640) Stream removed, broadcasting: 3\nI0505 00:16:10.598760 1831 log.go:172] (0xc000af53f0) (0xc00060ce60) Stream removed, broadcasting: 5\n" May 5 00:16:10.603: INFO: stdout: "" May 5 00:16:10.603: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-748 execpod-affinityxps9n -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.33.3:80/ ; done' May 5 00:16:10.908: INFO: stderr: "I0505 00:16:10.734339 1851 log.go:172] (0xc000940bb0) (0xc0008f9cc0) Create stream\nI0505 00:16:10.734409 1851 log.go:172] 
(0xc000940bb0) (0xc0008f9cc0) Stream added, broadcasting: 1\nI0505 00:16:10.737950 1851 log.go:172] (0xc000940bb0) Reply frame received for 1\nI0505 00:16:10.738002 1851 log.go:172] (0xc000940bb0) (0xc0008ed720) Create stream\nI0505 00:16:10.738020 1851 log.go:172] (0xc000940bb0) (0xc0008ed720) Stream added, broadcasting: 3\nI0505 00:16:10.739148 1851 log.go:172] (0xc000940bb0) Reply frame received for 3\nI0505 00:16:10.739184 1851 log.go:172] (0xc000940bb0) (0xc0008e34a0) Create stream\nI0505 00:16:10.739195 1851 log.go:172] (0xc000940bb0) (0xc0008e34a0) Stream added, broadcasting: 5\nI0505 00:16:10.740190 1851 log.go:172] (0xc000940bb0) Reply frame received for 5\nI0505 00:16:10.814009 1851 log.go:172] (0xc000940bb0) Data frame received for 5\nI0505 00:16:10.814052 1851 log.go:172] (0xc0008e34a0) (5) Data frame handling\nI0505 00:16:10.814068 1851 log.go:172] (0xc0008e34a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.33.3:80/\nI0505 00:16:10.814092 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.814103 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.814127 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.820052 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.820076 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.820097 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.820646 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.820681 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.820691 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.820710 1851 log.go:172] (0xc000940bb0) Data frame received for 5\nI0505 00:16:10.820790 1851 log.go:172] (0xc0008e34a0) (5) Data frame handling\nI0505 00:16:10.820837 1851 log.go:172] (0xc0008e34a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.33.3:80/\nI0505 
00:16:10.827975 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.827997 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.828019 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.828527 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.828549 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.828561 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.828578 1851 log.go:172] (0xc000940bb0) Data frame received for 5\nI0505 00:16:10.828595 1851 log.go:172] (0xc0008e34a0) (5) Data frame handling\nI0505 00:16:10.828631 1851 log.go:172] (0xc0008e34a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.33.3:80/\nI0505 00:16:10.834039 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.834055 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.834069 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.834930 1851 log.go:172] (0xc000940bb0) Data frame received for 5\nI0505 00:16:10.834955 1851 log.go:172] (0xc0008e34a0) (5) Data frame handling\nI0505 00:16:10.834969 1851 log.go:172] (0xc0008e34a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.33.3:80/\nI0505 00:16:10.834980 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.834986 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.834992 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.839681 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.839706 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.839746 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.840125 1851 log.go:172] (0xc000940bb0) Data frame received for 5\nI0505 00:16:10.840148 1851 log.go:172] (0xc0008e34a0) (5) Data frame handling\nI0505 00:16:10.840163 1851 log.go:172] (0xc0008e34a0) (5) Data frame 
sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.33.3:80/\nI0505 00:16:10.840181 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.840203 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.840223 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.845904 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.845926 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.845944 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.846438 1851 log.go:172] (0xc000940bb0) Data frame received for 5\nI0505 00:16:10.846464 1851 log.go:172] (0xc0008e34a0) (5) Data frame handling\nI0505 00:16:10.846479 1851 log.go:172] (0xc0008e34a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.33.3:80/\nI0505 00:16:10.846501 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.846518 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.846534 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.851074 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.851096 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.851113 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.851968 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.851996 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.852010 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.852032 1851 log.go:172] (0xc000940bb0) Data frame received for 5\nI0505 00:16:10.852066 1851 log.go:172] (0xc0008e34a0) (5) Data frame handling\nI0505 00:16:10.852104 1851 log.go:172] (0xc0008e34a0) (5) Data frame sent\nI0505 00:16:10.852150 1851 log.go:172] (0xc000940bb0) Data frame received for 5\nI0505 00:16:10.852167 1851 log.go:172] (0xc0008e34a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.96.33.3:80/\nI0505 00:16:10.852211 1851 log.go:172] (0xc0008e34a0) (5) Data frame sent\nI0505 00:16:10.856693 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.856713 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.856730 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.857486 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.857507 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.857517 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.857538 1851 log.go:172] (0xc000940bb0) Data frame received for 5\nI0505 00:16:10.857551 1851 log.go:172] (0xc0008e34a0) (5) Data frame handling\nI0505 00:16:10.857568 1851 log.go:172] (0xc0008e34a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.33.3:80/\nI0505 00:16:10.860521 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.860545 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.860562 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.861430 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.861452 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.861465 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.861483 1851 log.go:172] (0xc000940bb0) Data frame received for 5\nI0505 00:16:10.861494 1851 log.go:172] (0xc0008e34a0) (5) Data frame handling\nI0505 00:16:10.861505 1851 log.go:172] (0xc0008e34a0) (5) Data frame sent\nI0505 00:16:10.861518 1851 log.go:172] (0xc000940bb0) Data frame received for 5\nI0505 00:16:10.861528 1851 log.go:172] (0xc0008e34a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.33.3:80/\nI0505 00:16:10.861550 1851 log.go:172] (0xc0008e34a0) (5) Data frame sent\nI0505 00:16:10.866017 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.866038 1851 log.go:172] 
(0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.866059 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.866436 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.866469 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.866484 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.866503 1851 log.go:172] (0xc000940bb0) Data frame received for 5\nI0505 00:16:10.866515 1851 log.go:172] (0xc0008e34a0) (5) Data frame handling\nI0505 00:16:10.866528 1851 log.go:172] (0xc0008e34a0) (5) Data frame sent\nI0505 00:16:10.866542 1851 log.go:172] (0xc000940bb0) Data frame received for 5\nI0505 00:16:10.866554 1851 log.go:172] (0xc0008e34a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.33.3:80/\nI0505 00:16:10.866589 1851 log.go:172] (0xc0008e34a0) (5) Data frame sent\nI0505 00:16:10.870360 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.870379 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.870397 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.871025 1851 log.go:172] (0xc000940bb0) Data frame received for 5\nI0505 00:16:10.871071 1851 log.go:172] (0xc0008e34a0) (5) Data frame handling\nI0505 00:16:10.871098 1851 log.go:172] (0xc0008e34a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.33.3:80/\nI0505 00:16:10.871146 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.871172 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.871191 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.874865 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.874891 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.874911 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.875347 1851 log.go:172] (0xc000940bb0) Data frame received for 5\nI0505 00:16:10.875368 1851 
log.go:172] (0xc0008e34a0) (5) Data frame handling\nI0505 00:16:10.875381 1851 log.go:172] (0xc0008e34a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.33.3:80/\nI0505 00:16:10.875399 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.875418 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.875431 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.879822 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.879844 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.879863 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.880378 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.880410 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.880426 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.880445 1851 log.go:172] (0xc000940bb0) Data frame received for 5\nI0505 00:16:10.880470 1851 log.go:172] (0xc0008e34a0) (5) Data frame handling\nI0505 00:16:10.880497 1851 log.go:172] (0xc0008e34a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.33.3:80/\nI0505 00:16:10.884256 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.884279 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.884296 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.884556 1851 log.go:172] (0xc000940bb0) Data frame received for 5\nI0505 00:16:10.884575 1851 log.go:172] (0xc0008e34a0) (5) Data frame handling\nI0505 00:16:10.884594 1851 log.go:172] (0xc0008e34a0) (5) Data frame sent\n+ echo\n+ curl -q -sI0505 00:16:10.884619 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.884670 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.884689 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.884709 1851 log.go:172] (0xc000940bb0) Data frame received for 
5\nI0505 00:16:10.884733 1851 log.go:172] (0xc0008e34a0) (5) Data frame handling\nI0505 00:16:10.884766 1851 log.go:172] (0xc0008e34a0) (5) Data frame sent\n --connect-timeout 2 http://10.96.33.3:80/\nI0505 00:16:10.889083 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.889105 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.889286 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.889617 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.889654 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.889676 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.889703 1851 log.go:172] (0xc000940bb0) Data frame received for 5\nI0505 00:16:10.889726 1851 log.go:172] (0xc0008e34a0) (5) Data frame handling\nI0505 00:16:10.889751 1851 log.go:172] (0xc0008e34a0) (5) Data frame sent\nI0505 00:16:10.889767 1851 log.go:172] (0xc000940bb0) Data frame received for 5\nI0505 00:16:10.889781 1851 log.go:172] (0xc0008e34a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.33.3:80/\nI0505 00:16:10.889812 1851 log.go:172] (0xc0008e34a0) (5) Data frame sent\nI0505 00:16:10.893357 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.893376 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.893389 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.893945 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.893973 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.893986 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.894005 1851 log.go:172] (0xc000940bb0) Data frame received for 5\nI0505 00:16:10.894015 1851 log.go:172] (0xc0008e34a0) (5) Data frame handling\nI0505 00:16:10.894027 1851 log.go:172] (0xc0008e34a0) (5) Data frame sent\n+ echo\nI0505 00:16:10.894042 1851 log.go:172] (0xc000940bb0) Data frame received for 
5\nI0505 00:16:10.894059 1851 log.go:172] (0xc0008e34a0) (5) Data frame handling\nI0505 00:16:10.894072 1851 log.go:172] (0xc0008e34a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.96.33.3:80/\nI0505 00:16:10.899373 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.899397 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.899419 1851 log.go:172] (0xc0008ed720) (3) Data frame sent\nI0505 00:16:10.900246 1851 log.go:172] (0xc000940bb0) Data frame received for 5\nI0505 00:16:10.900287 1851 log.go:172] (0xc0008e34a0) (5) Data frame handling\nI0505 00:16:10.900313 1851 log.go:172] (0xc000940bb0) Data frame received for 3\nI0505 00:16:10.900346 1851 log.go:172] (0xc0008ed720) (3) Data frame handling\nI0505 00:16:10.902705 1851 log.go:172] (0xc000940bb0) Data frame received for 1\nI0505 00:16:10.902736 1851 log.go:172] (0xc0008f9cc0) (1) Data frame handling\nI0505 00:16:10.902747 1851 log.go:172] (0xc0008f9cc0) (1) Data frame sent\nI0505 00:16:10.902763 1851 log.go:172] (0xc000940bb0) (0xc0008f9cc0) Stream removed, broadcasting: 1\nI0505 00:16:10.902785 1851 log.go:172] (0xc000940bb0) Go away received\nI0505 00:16:10.903206 1851 log.go:172] (0xc000940bb0) (0xc0008f9cc0) Stream removed, broadcasting: 1\nI0505 00:16:10.903235 1851 log.go:172] (0xc000940bb0) (0xc0008ed720) Stream removed, broadcasting: 3\nI0505 00:16:10.903247 1851 log.go:172] (0xc000940bb0) (0xc0008e34a0) Stream removed, broadcasting: 5\n" May 5 00:16:10.909: INFO: stdout: "\naffinity-clusterip-wnhfh\naffinity-clusterip-wnhfh\naffinity-clusterip-wnhfh\naffinity-clusterip-wnhfh\naffinity-clusterip-wnhfh\naffinity-clusterip-wnhfh\naffinity-clusterip-wnhfh\naffinity-clusterip-wnhfh\naffinity-clusterip-wnhfh\naffinity-clusterip-wnhfh\naffinity-clusterip-wnhfh\naffinity-clusterip-wnhfh\naffinity-clusterip-wnhfh\naffinity-clusterip-wnhfh\naffinity-clusterip-wnhfh\naffinity-clusterip-wnhfh" May 5 00:16:10.909: INFO: Received response from host: May 5 
00:16:10.909: INFO: Received response from host: affinity-clusterip-wnhfh May 5 00:16:10.909: INFO: Received response from host: affinity-clusterip-wnhfh May 5 00:16:10.909: INFO: Received response from host: affinity-clusterip-wnhfh May 5 00:16:10.909: INFO: Received response from host: affinity-clusterip-wnhfh May 5 00:16:10.909: INFO: Received response from host: affinity-clusterip-wnhfh May 5 00:16:10.909: INFO: Received response from host: affinity-clusterip-wnhfh May 5 00:16:10.909: INFO: Received response from host: affinity-clusterip-wnhfh May 5 00:16:10.909: INFO: Received response from host: affinity-clusterip-wnhfh May 5 00:16:10.909: INFO: Received response from host: affinity-clusterip-wnhfh May 5 00:16:10.909: INFO: Received response from host: affinity-clusterip-wnhfh May 5 00:16:10.909: INFO: Received response from host: affinity-clusterip-wnhfh May 5 00:16:10.909: INFO: Received response from host: affinity-clusterip-wnhfh May 5 00:16:10.909: INFO: Received response from host: affinity-clusterip-wnhfh May 5 00:16:10.909: INFO: Received response from host: affinity-clusterip-wnhfh May 5 00:16:10.909: INFO: Received response from host: affinity-clusterip-wnhfh May 5 00:16:10.909: INFO: Received response from host: affinity-clusterip-wnhfh May 5 00:16:10.909: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-748, will wait for the garbage collector to delete the pods May 5 00:16:11.025: INFO: Deleting ReplicationController affinity-clusterip took: 15.378707ms May 5 00:16:11.325: INFO: Terminating ReplicationController affinity-clusterip pods took: 300.482716ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:16:24.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-748" for this suite. 
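The affinity check above reduces to a single predicate: every response from the repeated curls against the ClusterIP must name the same backend pod (here `affinity-clusterip-wnhfh`, 16 times). A minimal stand-alone sketch of that predicate — the function name `has_session_affinity` is hypothetical and not part of the e2e framework:

```python
def has_session_affinity(responses):
    """True iff every non-empty response names the same backend pod.

    Mirrors the e2e check: all curls against the ClusterIP should be
    answered by exactly one pod when session affinity is enabled.
    """
    hosts = [r.strip() for r in responses if r.strip()]
    return len(hosts) > 0 and len(set(hosts)) == 1

# The captured stdout above, reconstructed: a leading blank line, then
# sixteen identical pod names separated by newlines.
stdout = "\naffinity-clusterip-wnhfh" * 16
assert has_session_affinity(stdout.split("\n"))
```

A run where two different pod names appeared would make the predicate false, which is exactly the failure mode this spec guards against.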
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:29.004 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":107,"skipped":1826,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:16:24.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 5 00:16:25.004: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e5830f59-7ff5-436e-ab1a-229f2b2794ff" in namespace "projected-5186" to be "Succeeded or Failed" May 5 00:16:25.052: INFO: Pod "downwardapi-volume-e5830f59-7ff5-436e-ab1a-229f2b2794ff": 
Phase="Pending", Reason="", readiness=false. Elapsed: 47.67963ms May 5 00:16:27.184: INFO: Pod "downwardapi-volume-e5830f59-7ff5-436e-ab1a-229f2b2794ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180020092s May 5 00:16:29.189: INFO: Pod "downwardapi-volume-e5830f59-7ff5-436e-ab1a-229f2b2794ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.185037076s STEP: Saw pod success May 5 00:16:29.189: INFO: Pod "downwardapi-volume-e5830f59-7ff5-436e-ab1a-229f2b2794ff" satisfied condition "Succeeded or Failed" May 5 00:16:29.193: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-e5830f59-7ff5-436e-ab1a-229f2b2794ff container client-container: STEP: delete the pod May 5 00:16:29.232: INFO: Waiting for pod downwardapi-volume-e5830f59-7ff5-436e-ab1a-229f2b2794ff to disappear May 5 00:16:29.247: INFO: Pod downwardapi-volume-e5830f59-7ff5-436e-ab1a-229f2b2794ff no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:16:29.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5186" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":108,"skipped":1868,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:16:29.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 5 00:16:29.354: INFO: Waiting up to 5m0s for pod "pod-9399ad46-0373-4606-8116-0434799ec6b3" in namespace "emptydir-7606" to be "Succeeded or Failed" May 5 00:16:29.361: INFO: Pod "pod-9399ad46-0373-4606-8116-0434799ec6b3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.594997ms May 5 00:16:31.373: INFO: Pod "pod-9399ad46-0373-4606-8116-0434799ec6b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019098097s May 5 00:16:33.377: INFO: Pod "pod-9399ad46-0373-4606-8116-0434799ec6b3": Phase="Running", Reason="", readiness=true. Elapsed: 4.023058481s May 5 00:16:35.393: INFO: Pod "pod-9399ad46-0373-4606-8116-0434799ec6b3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.038859577s STEP: Saw pod success May 5 00:16:35.393: INFO: Pod "pod-9399ad46-0373-4606-8116-0434799ec6b3" satisfied condition "Succeeded or Failed" May 5 00:16:35.396: INFO: Trying to get logs from node latest-worker pod pod-9399ad46-0373-4606-8116-0434799ec6b3 container test-container: STEP: delete the pod May 5 00:16:35.432: INFO: Waiting for pod pod-9399ad46-0373-4606-8116-0434799ec6b3 to disappear May 5 00:16:35.440: INFO: Pod pod-9399ad46-0373-4606-8116-0434799ec6b3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:16:35.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7606" for this suite. • [SLOW TEST:6.192 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":109,"skipped":1868,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:16:35.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable 
from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-32aabcf0-29e7-4ae9-b518-e72cea297f77 STEP: Creating a pod to test consume configMaps May 5 00:16:35.532: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0878418a-a1a6-4566-847c-e9e5a810beb5" in namespace "projected-6590" to be "Succeeded or Failed" May 5 00:16:35.557: INFO: Pod "pod-projected-configmaps-0878418a-a1a6-4566-847c-e9e5a810beb5": Phase="Pending", Reason="", readiness=false. Elapsed: 25.548418ms May 5 00:16:37.561: INFO: Pod "pod-projected-configmaps-0878418a-a1a6-4566-847c-e9e5a810beb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029517371s May 5 00:16:39.566: INFO: Pod "pod-projected-configmaps-0878418a-a1a6-4566-847c-e9e5a810beb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033874129s STEP: Saw pod success May 5 00:16:39.566: INFO: Pod "pod-projected-configmaps-0878418a-a1a6-4566-847c-e9e5a810beb5" satisfied condition "Succeeded or Failed" May 5 00:16:39.569: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-0878418a-a1a6-4566-847c-e9e5a810beb5 container projected-configmap-volume-test: STEP: delete the pod May 5 00:16:39.637: INFO: Waiting for pod pod-projected-configmaps-0878418a-a1a6-4566-847c-e9e5a810beb5 to disappear May 5 00:16:39.656: INFO: Pod pod-projected-configmaps-0878418a-a1a6-4566-847c-e9e5a810beb5 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:16:39.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6590" for this suite. 
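Nearly every spec above waits with the same shape: 'Waiting up to 5m0s for pod … to be "Succeeded or Failed"', then a poll-report-retry loop logging `Phase="Pending" … Elapsed: 2.03s` until a terminal phase. A simplified sketch of that pattern with an injected `get_phase` callable — hypothetical; the real framework polls through client-go, not a bare loop like this:

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until a terminal pod phase or until timeout expires.

    Returns the final phase; mirrors the log lines
    'Phase="Pending", ... Elapsed: 2.05s' -> 'Phase="Succeeded"'.
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod: Phase="{phase}", Elapsed: {elapsed:.2f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        time.sleep(interval)

# Fake phase sequence matching a typical spec: Pending -> Running -> Succeeded.
phases = iter(["Pending", "Running", "Succeeded"])
assert wait_for_pod_condition(lambda: next(phases), interval=0.01) == "Succeeded"
```

The 5m0s pod timeout and 3m0s node-readiness timeout seen throughout the log are just different `timeout` values fed to the same idea.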
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":110,"skipped":1894,"failed":0} SSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:16:39.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 5 00:16:44.866: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:16:44.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3115" for this suite. 
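Adoption and release in the ReplicaSet spec above hinge on label-selector matching: a pod whose labels satisfy the selector is adopted by the controller; once the matched label changes, the pod is released. A toy sketch of the matching rule, covering equality-based selectors only — the helper name is hypothetical:

```python
def selector_matches(selector, labels):
    """Equality-based selector: every selector key/value must appear in labels."""
    return all(labels.get(k) == v for k, v in selector.items())

selector = {"name": "pod-adoption-release"}
pod_labels = {"name": "pod-adoption-release"}
assert selector_matches(selector, pod_labels)      # orphan pod gets adopted

pod_labels["name"] = "no-longer-matching"          # the matched label changes
assert not selector_matches(selector, pod_labels)  # pod is released
```

Real selectors also support set-based operators (`In`, `NotIn`, `Exists`), which this sketch omits.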
• [SLOW TEST:5.327 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":288,"completed":111,"skipped":1902,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:16:45.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 5 00:16:50.192: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:16:50.237: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9399" for this suite. • [SLOW TEST:5.259 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":288,"completed":112,"skipped":1908,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:16:50.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-26bb6718-e297-495f-ae6a-3f851a51e4fe STEP: Creating secret with name 
s-test-opt-upd-045e92fe-8da6-44b0-ba1c-605f42d337c0 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-26bb6718-e297-495f-ae6a-3f851a51e4fe STEP: Updating secret s-test-opt-upd-045e92fe-8da6-44b0-ba1c-605f42d337c0 STEP: Creating secret with name s-test-opt-create-daa14d8b-7e90-4342-ab25-dd76911ee1ff STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:16:59.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5863" for this suite. • [SLOW TEST:9.110 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":113,"skipped":1924,"failed":0} SSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:16:59.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 5 00:16:59.471: INFO: Waiting 
up to 5m0s for pod "downward-api-129987e7-1ca1-47de-b2e0-339d9262d5fe" in namespace "downward-api-3587" to be "Succeeded or Failed" May 5 00:16:59.507: INFO: Pod "downward-api-129987e7-1ca1-47de-b2e0-339d9262d5fe": Phase="Pending", Reason="", readiness=false. Elapsed: 35.313716ms May 5 00:17:01.592: INFO: Pod "downward-api-129987e7-1ca1-47de-b2e0-339d9262d5fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12038847s May 5 00:17:03.596: INFO: Pod "downward-api-129987e7-1ca1-47de-b2e0-339d9262d5fe": Phase="Running", Reason="", readiness=true. Elapsed: 4.125230485s May 5 00:17:05.601: INFO: Pod "downward-api-129987e7-1ca1-47de-b2e0-339d9262d5fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.12998391s STEP: Saw pod success May 5 00:17:05.601: INFO: Pod "downward-api-129987e7-1ca1-47de-b2e0-339d9262d5fe" satisfied condition "Succeeded or Failed" May 5 00:17:05.604: INFO: Trying to get logs from node latest-worker2 pod downward-api-129987e7-1ca1-47de-b2e0-339d9262d5fe container dapi-container: STEP: delete the pod May 5 00:17:05.649: INFO: Waiting for pod downward-api-129987e7-1ca1-47de-b2e0-339d9262d5fe to disappear May 5 00:17:05.663: INFO: Pod downward-api-129987e7-1ca1-47de-b2e0-339d9262d5fe no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:17:05.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3587" for this suite. 
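The Downward API spec above wires pod metadata into container env vars through fieldRef paths (for example `metadata.uid` exposed as an env var). A rough dictionary-based sketch of that resolution step — the helper and the sample values are illustrative; the real kubelet resolves fieldRef against the live Pod object:

```python
def resolve_downward_env(pod, env_spec):
    """Map env var names to values pulled from the pod via fieldRef-style paths."""
    def lookup(field_path):
        obj = pod
        for part in field_path.split("."):
            obj = obj[part]
        return obj

    return {name: lookup(path) for name, path in env_spec.items()}

# Illustrative pod object and env spec (values are made up for the sketch):
pod = {"metadata": {"name": "downward-api-example", "uid": "1234-abcd"}}
env = resolve_downward_env(pod, {"POD_NAME": "metadata.name",
                                 "POD_UID": "metadata.uid"})
assert env["POD_UID"] == "1234-abcd"
```

The spec then reads the container's log and asserts the printed env values match the pod's actual name, namespace, and UID.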
• [SLOW TEST:6.290 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":288,"completed":114,"skipped":1929,"failed":0} S ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:17:05.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 5 00:17:05.751: INFO: Waiting up to 5m0s for pod "busybox-user-65534-c7a3e69c-08db-4a35-89f4-d4ad8829dedd" in namespace "security-context-test-9981" to be "Succeeded or Failed" May 5 00:17:05.759: INFO: Pod "busybox-user-65534-c7a3e69c-08db-4a35-89f4-d4ad8829dedd": Phase="Pending", Reason="", readiness=false. Elapsed: 7.701885ms May 5 00:17:08.095: INFO: Pod "busybox-user-65534-c7a3e69c-08db-4a35-89f4-d4ad8829dedd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.343111713s May 5 00:17:10.098: INFO: Pod "busybox-user-65534-c7a3e69c-08db-4a35-89f4-d4ad8829dedd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.346948242s May 5 00:17:10.098: INFO: Pod "busybox-user-65534-c7a3e69c-08db-4a35-89f4-d4ad8829dedd" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:17:10.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9981" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":115,"skipped":1930,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:17:10.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:17:27.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7187" for this suite. • [SLOW TEST:17.353 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":288,"completed":116,"skipped":1943,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:17:27.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-d3e263db-51f6-4bff-8657-ed637172b30f STEP: Creating secret with name s-test-opt-upd-4d1e93d3-7351-4d8a-af15-fc52701bf939 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-d3e263db-51f6-4bff-8657-ed637172b30f STEP: Updating secret s-test-opt-upd-4d1e93d3-7351-4d8a-af15-fc52701bf939 STEP: Creating secret with name s-test-opt-create-a6290583-f749-4135-b03e-7733f12c134b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:17:35.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2339" for this suite. 
• [SLOW TEST:8.200 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":117,"skipped":2030,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:17:35.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args May 5 00:17:35.754: INFO: Waiting up to 5m0s for pod "var-expansion-c27f96df-e4ba-436f-95f5-bf62c47ea29f" in namespace "var-expansion-1807" to be "Succeeded or Failed" May 5 00:17:35.770: INFO: Pod "var-expansion-c27f96df-e4ba-436f-95f5-bf62c47ea29f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.062156ms May 5 00:17:37.774: INFO: Pod "var-expansion-c27f96df-e4ba-436f-95f5-bf62c47ea29f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.0200671s May 5 00:17:39.778: INFO: Pod "var-expansion-c27f96df-e4ba-436f-95f5-bf62c47ea29f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024471405s STEP: Saw pod success May 5 00:17:39.779: INFO: Pod "var-expansion-c27f96df-e4ba-436f-95f5-bf62c47ea29f" satisfied condition "Succeeded or Failed" May 5 00:17:39.782: INFO: Trying to get logs from node latest-worker pod var-expansion-c27f96df-e4ba-436f-95f5-bf62c47ea29f container dapi-container: STEP: delete the pod May 5 00:17:39.872: INFO: Waiting for pod var-expansion-c27f96df-e4ba-436f-95f5-bf62c47ea29f to disappear May 5 00:17:39.875: INFO: Pod var-expansion-c27f96df-e4ba-436f-95f5-bf62c47ea29f no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:17:39.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1807" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":288,"completed":118,"skipped":2055,"failed":0} ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:17:39.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create services for rc [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 5 00:17:39.928: INFO: namespace kubectl-23 May 5 00:17:39.928: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-23' May 5 00:17:40.173: INFO: stderr: "" May 5 00:17:40.173: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 5 00:17:41.176: INFO: Selector matched 1 pods for map[app:agnhost] May 5 00:17:41.176: INFO: Found 0 / 1 May 5 00:17:42.221: INFO: Selector matched 1 pods for map[app:agnhost] May 5 00:17:42.221: INFO: Found 0 / 1 May 5 00:17:43.200: INFO: Selector matched 1 pods for map[app:agnhost] May 5 00:17:43.200: INFO: Found 0 / 1 May 5 00:17:44.178: INFO: Selector matched 1 pods for map[app:agnhost] May 5 00:17:44.178: INFO: Found 0 / 1 May 5 00:17:45.177: INFO: Selector matched 1 pods for map[app:agnhost] May 5 00:17:45.177: INFO: Found 1 / 1 May 5 00:17:45.177: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 5 00:17:45.181: INFO: Selector matched 1 pods for map[app:agnhost] May 5 00:17:45.181: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 5 00:17:45.181: INFO: wait on agnhost-master startup in kubectl-23 May 5 00:17:45.181: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs agnhost-master-hjd77 agnhost-master --namespace=kubectl-23' May 5 00:17:45.297: INFO: stderr: "" May 5 00:17:45.297: INFO: stdout: "Paused\n" STEP: exposing RC May 5 00:17:45.298: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-23' May 5 00:17:45.463: INFO: stderr: "" May 5 00:17:45.463: INFO: stdout: "service/rm2 exposed\n" May 5 00:17:45.492: INFO: Service rm2 in namespace kubectl-23 found. STEP: exposing service May 5 00:17:47.498: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-23' May 5 00:17:47.631: INFO: stderr: "" May 5 00:17:47.631: INFO: stdout: "service/rm3 exposed\n" May 5 00:17:47.639: INFO: Service rm3 in namespace kubectl-23 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:17:49.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-23" for this suite. 
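The `kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379` invocation in the log is roughly equivalent to creating the following Service by hand (a sketch; the selector is assumed from the RC's `app: agnhost` label shown in the log):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-23
spec:
  selector:
    app: agnhost        # matched pods back the service
  ports:
  - port: 1234          # service port
    targetPort: 6379    # container port traffic is forwarded to
```

Exposing the resulting service again as `rm3` simply creates a second Service with the same selector and a different port mapping.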
• [SLOW TEST:9.772 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1224 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":288,"completed":119,"skipped":2055,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:17:49.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 5 00:17:49.748: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ee196036-006e-48d5-af35-7817d304c528" in namespace "projected-4449" to be "Succeeded or Failed" May 5 00:17:49.769: INFO: Pod "downwardapi-volume-ee196036-006e-48d5-af35-7817d304c528": Phase="Pending", Reason="", readiness=false. 
Elapsed: 21.057008ms May 5 00:17:52.444: INFO: Pod "downwardapi-volume-ee196036-006e-48d5-af35-7817d304c528": Phase="Pending", Reason="", readiness=false. Elapsed: 2.695487348s May 5 00:17:54.448: INFO: Pod "downwardapi-volume-ee196036-006e-48d5-af35-7817d304c528": Phase="Running", Reason="", readiness=true. Elapsed: 4.699574253s May 5 00:17:56.452: INFO: Pod "downwardapi-volume-ee196036-006e-48d5-af35-7817d304c528": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.703430792s STEP: Saw pod success May 5 00:17:56.452: INFO: Pod "downwardapi-volume-ee196036-006e-48d5-af35-7817d304c528" satisfied condition "Succeeded or Failed" May 5 00:17:56.455: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-ee196036-006e-48d5-af35-7817d304c528 container client-container: STEP: delete the pod May 5 00:17:56.488: INFO: Waiting for pod downwardapi-volume-ee196036-006e-48d5-af35-7817d304c528 to disappear May 5 00:17:56.502: INFO: Pod downwardapi-volume-ee196036-006e-48d5-af35-7817d304c528 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:17:56.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4449" for this suite. 
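The downward API test above verifies that when a container sets no CPU limit, the file exposed through a projected downward API volume reports the node's allocatable CPU instead. A sketch of the volume fragment involved (the container name matches the log; the path is illustrative):

```yaml
volumes:
- name: podinfo
  projected:
    sources:
    - downwardAPI:
        items:
        - path: cpu_limit             # illustrative file name
          resourceFieldRef:
            containerName: client-container
            resource: limits.cpu      # falls back to node allocatable when unset
```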
• [SLOW TEST:6.873 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":120,"skipped":2059,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:17:56.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-4995/configmap-test-b0ae6b36-16c8-4189-95fa-ba1a4b9d8a6d STEP: Creating a pod to test consume configMaps May 5 00:17:56.738: INFO: Waiting up to 5m0s for pod "pod-configmaps-b66c43b7-e72b-4803-8156-4266141bc330" in namespace "configmap-4995" to be "Succeeded or Failed" May 5 00:17:56.754: INFO: Pod "pod-configmaps-b66c43b7-e72b-4803-8156-4266141bc330": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.314063ms May 5 00:17:58.758: INFO: Pod "pod-configmaps-b66c43b7-e72b-4803-8156-4266141bc330": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020256988s May 5 00:18:00.760: INFO: Pod "pod-configmaps-b66c43b7-e72b-4803-8156-4266141bc330": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02281395s STEP: Saw pod success May 5 00:18:00.761: INFO: Pod "pod-configmaps-b66c43b7-e72b-4803-8156-4266141bc330" satisfied condition "Succeeded or Failed" May 5 00:18:00.763: INFO: Trying to get logs from node latest-worker pod pod-configmaps-b66c43b7-e72b-4803-8156-4266141bc330 container env-test: STEP: delete the pod May 5 00:18:00.810: INFO: Waiting for pod pod-configmaps-b66c43b7-e72b-4803-8156-4266141bc330 to disappear May 5 00:18:00.861: INFO: Pod pod-configmaps-b66c43b7-e72b-4803-8156-4266141bc330 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:18:00.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4995" for this suite. 
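The ConfigMap environment test consumes a ConfigMap key as an environment variable in the `env-test` container. A minimal sketch of that container fragment (the variable and key names are invented for illustration; the ConfigMap name is from the log):

```yaml
containers:
- name: env-test
  image: busybox        # illustrative image
  command: ["env"]
  env:
  - name: CONFIG_DATA   # invented variable name
    valueFrom:
      configMapKeyRef:
        name: configmap-test-b0ae6b36-16c8-4189-95fa-ba1a4b9d8a6d
        key: data-1     # invented key name
```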
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":121,"skipped":2090,"failed":0} SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:18:00.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 5 00:18:01.158: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:18:07.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8468" for this suite. 
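The init-container test relies on the rule that with `restartPolicy: Never`, a failing init container is not retried, the app containers never start, and the pod is marked Failed. A sketch of a pod that exercises this (all names and images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail-example   # illustrative name
spec:
  restartPolicy: Never          # failed init container fails the whole pod
  initContainers:
  - name: init1
    image: busybox
    command: ["/bin/false"]     # exits non-zero, so init never succeeds
  containers:
  - name: run1
    image: busybox
    command: ["/bin/true"]      # never started
```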
• [SLOW TEST:6.266 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":288,"completed":122,"skipped":2093,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:18:07.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 00:18:08.063: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 00:18:10.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724234688, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724234688, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724234688, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724234688, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 00:18:13.357: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:18:13.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3670" for this suite. STEP: Destroying namespace "webhook-3670-markers" for this suite. 
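The admission-webhook test registers mutating webhooks backed by the `e2e-test-webhook` service seen in the log, then lists and collection-deletes them. A rough sketch of what one such registration object looks like (the webhook name, path, and rules are illustrative; the CA bundle is elided):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook   # illustrative name
webhooks:
- name: adding-configmap-data.example.com   # illustrative name
  clientConfig:
    service:
      name: e2e-test-webhook
      namespace: webhook-3670
      path: /mutating-configmaps   # illustrative path
    caBundle: ""                   # CA bundle elided in this sketch
  rules:
  - operations: ["CREATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["configmaps"]
  sideEffects: None
  admissionReviewVersions: ["v1", "v1beta1"]
```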
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.987 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":288,"completed":123,"skipped":2158,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:18:14.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the 
apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:18:14.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3073" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":288,"completed":124,"skipped":2173,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:18:14.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 5 00:18:14.847: INFO: Waiting up to 5m0s for pod "downwardapi-volume-048f7ac8-700c-48f7-b025-d736f4d97b2b" in namespace "downward-api-9794" to be "Succeeded 
or Failed" May 5 00:18:14.912: INFO: Pod "downwardapi-volume-048f7ac8-700c-48f7-b025-d736f4d97b2b": Phase="Pending", Reason="", readiness=false. Elapsed: 65.698869ms May 5 00:18:16.916: INFO: Pod "downwardapi-volume-048f7ac8-700c-48f7-b025-d736f4d97b2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069566128s May 5 00:18:18.921: INFO: Pod "downwardapi-volume-048f7ac8-700c-48f7-b025-d736f4d97b2b": Phase="Running", Reason="", readiness=true. Elapsed: 4.074258608s May 5 00:18:20.926: INFO: Pod "downwardapi-volume-048f7ac8-700c-48f7-b025-d736f4d97b2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.079174415s STEP: Saw pod success May 5 00:18:20.926: INFO: Pod "downwardapi-volume-048f7ac8-700c-48f7-b025-d736f4d97b2b" satisfied condition "Succeeded or Failed" May 5 00:18:20.929: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-048f7ac8-700c-48f7-b025-d736f4d97b2b container client-container: STEP: delete the pod May 5 00:18:20.969: INFO: Waiting for pod downwardapi-volume-048f7ac8-700c-48f7-b025-d736f4d97b2b to disappear May 5 00:18:20.988: INFO: Pod downwardapi-volume-048f7ac8-700c-48f7-b025-d736f4d97b2b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:18:20.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9794" for this suite. 
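The "should provide podname only" test mounts a downward API volume that exposes the pod's own name as a file. A sketch of the volume fragment (the file path is illustrative):

```yaml
volumes:
- name: podinfo
  downwardAPI:
    items:
    - path: podname            # illustrative file name read back by the test
      fieldRef:
        fieldPath: metadata.name
```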
• [SLOW TEST:6.743 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":288,"completed":125,"skipped":2184,"failed":0} [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:18:20.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-c4rx STEP: Creating a pod to test atomic-volume-subpath May 5 00:18:21.149: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-c4rx" in namespace "subpath-5386" to be "Succeeded or Failed" May 5 00:18:21.222: INFO: Pod "pod-subpath-test-projected-c4rx": Phase="Pending", Reason="", readiness=false. Elapsed: 72.205848ms May 5 00:18:23.313: INFO: Pod "pod-subpath-test-projected-c4rx": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.163871301s May 5 00:18:25.318: INFO: Pod "pod-subpath-test-projected-c4rx": Phase="Running", Reason="", readiness=true. Elapsed: 4.16875685s May 5 00:18:27.322: INFO: Pod "pod-subpath-test-projected-c4rx": Phase="Running", Reason="", readiness=true. Elapsed: 6.172769571s May 5 00:18:29.331: INFO: Pod "pod-subpath-test-projected-c4rx": Phase="Running", Reason="", readiness=true. Elapsed: 8.181526182s May 5 00:18:31.335: INFO: Pod "pod-subpath-test-projected-c4rx": Phase="Running", Reason="", readiness=true. Elapsed: 10.185675062s May 5 00:18:33.339: INFO: Pod "pod-subpath-test-projected-c4rx": Phase="Running", Reason="", readiness=true. Elapsed: 12.189713156s May 5 00:18:35.344: INFO: Pod "pod-subpath-test-projected-c4rx": Phase="Running", Reason="", readiness=true. Elapsed: 14.194507913s May 5 00:18:37.348: INFO: Pod "pod-subpath-test-projected-c4rx": Phase="Running", Reason="", readiness=true. Elapsed: 16.198386415s May 5 00:18:39.351: INFO: Pod "pod-subpath-test-projected-c4rx": Phase="Running", Reason="", readiness=true. Elapsed: 18.201920959s May 5 00:18:41.355: INFO: Pod "pod-subpath-test-projected-c4rx": Phase="Running", Reason="", readiness=true. Elapsed: 20.205576377s May 5 00:18:43.360: INFO: Pod "pod-subpath-test-projected-c4rx": Phase="Running", Reason="", readiness=true. Elapsed: 22.210250244s May 5 00:18:45.365: INFO: Pod "pod-subpath-test-projected-c4rx": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.215257881s STEP: Saw pod success May 5 00:18:45.365: INFO: Pod "pod-subpath-test-projected-c4rx" satisfied condition "Succeeded or Failed" May 5 00:18:45.368: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-c4rx container test-container-subpath-projected-c4rx: STEP: delete the pod May 5 00:18:45.448: INFO: Waiting for pod pod-subpath-test-projected-c4rx to disappear May 5 00:18:45.580: INFO: Pod pod-subpath-test-projected-c4rx no longer exists STEP: Deleting pod pod-subpath-test-projected-c4rx May 5 00:18:45.580: INFO: Deleting pod "pod-subpath-test-projected-c4rx" in namespace "subpath-5386" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:18:45.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5386" for this suite. • [SLOW TEST:24.594 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":288,"completed":126,"skipped":2184,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:18:45.591: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:18:50.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8069" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":288,"completed":127,"skipped":2192,"failed":0} SS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:18:50.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 5 00:18:50.134: INFO: Waiting up to 5m0s for pod "downward-api-f5551e2e-8faa-453c-9447-359e724d1fd0" in namespace "downward-api-4627" to be "Succeeded or Failed" May 5 00:18:50.341: INFO: Pod "downward-api-f5551e2e-8faa-453c-9447-359e724d1fd0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 206.904411ms May 5 00:18:52.345: INFO: Pod "downward-api-f5551e2e-8faa-453c-9447-359e724d1fd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211320946s May 5 00:18:54.350: INFO: Pod "downward-api-f5551e2e-8faa-453c-9447-359e724d1fd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.216265778s STEP: Saw pod success May 5 00:18:54.350: INFO: Pod "downward-api-f5551e2e-8faa-453c-9447-359e724d1fd0" satisfied condition "Succeeded or Failed" May 5 00:18:54.354: INFO: Trying to get logs from node latest-worker pod downward-api-f5551e2e-8faa-453c-9447-359e724d1fd0 container dapi-container: STEP: delete the pod May 5 00:18:54.390: INFO: Waiting for pod downward-api-f5551e2e-8faa-453c-9447-359e724d1fd0 to disappear May 5 00:18:54.395: INFO: Pod downward-api-f5551e2e-8faa-453c-9447-359e724d1fd0 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:18:54.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4627" for this suite. 
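The downward API env-var test injects the pod's name, namespace, and IP into the `dapi-container` through `fieldRef` environment variables. A minimal sketch of that container fragment (variable names are illustrative):

```yaml
env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
```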
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":288,"completed":128,"skipped":2194,"failed":0} S ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:18:54.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6087 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6087 STEP: creating replication controller externalsvc in namespace services-6087 I0505 00:18:54.612526 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-6087, replica count: 2 I0505 00:18:57.663047 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0505 00:19:00.663329 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 5 00:19:00.748: 
INFO: Creating new exec pod May 5 00:19:04.782: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6087 execpodgk7p7 -- /bin/sh -x -c nslookup clusterip-service' May 5 00:19:04.980: INFO: stderr: "I0505 00:19:04.902526 1958 log.go:172] (0xc000ab4160) (0xc0006efd60) Create stream\nI0505 00:19:04.902590 1958 log.go:172] (0xc000ab4160) (0xc0006efd60) Stream added, broadcasting: 1\nI0505 00:19:04.905038 1958 log.go:172] (0xc000ab4160) Reply frame received for 1\nI0505 00:19:04.905072 1958 log.go:172] (0xc000ab4160) (0xc0006feb40) Create stream\nI0505 00:19:04.905084 1958 log.go:172] (0xc000ab4160) (0xc0006feb40) Stream added, broadcasting: 3\nI0505 00:19:04.906020 1958 log.go:172] (0xc000ab4160) Reply frame received for 3\nI0505 00:19:04.906045 1958 log.go:172] (0xc000ab4160) (0xc0006ff040) Create stream\nI0505 00:19:04.906056 1958 log.go:172] (0xc000ab4160) (0xc0006ff040) Stream added, broadcasting: 5\nI0505 00:19:04.906822 1958 log.go:172] (0xc000ab4160) Reply frame received for 5\nI0505 00:19:04.966372 1958 log.go:172] (0xc000ab4160) Data frame received for 5\nI0505 00:19:04.966397 1958 log.go:172] (0xc0006ff040) (5) Data frame handling\nI0505 00:19:04.966418 1958 log.go:172] (0xc0006ff040) (5) Data frame sent\n+ nslookup clusterip-service\nI0505 00:19:04.972462 1958 log.go:172] (0xc000ab4160) Data frame received for 3\nI0505 00:19:04.972480 1958 log.go:172] (0xc0006feb40) (3) Data frame handling\nI0505 00:19:04.972495 1958 log.go:172] (0xc0006feb40) (3) Data frame sent\nI0505 00:19:04.973304 1958 log.go:172] (0xc000ab4160) Data frame received for 3\nI0505 00:19:04.973318 1958 log.go:172] (0xc0006feb40) (3) Data frame handling\nI0505 00:19:04.973328 1958 log.go:172] (0xc0006feb40) (3) Data frame sent\nI0505 00:19:04.973800 1958 log.go:172] (0xc000ab4160) Data frame received for 3\nI0505 00:19:04.973816 1958 log.go:172] (0xc0006feb40) (3) Data frame handling\nI0505 00:19:04.973953 1958 
log.go:172] (0xc000ab4160) Data frame received for 5\nI0505 00:19:04.973971 1958 log.go:172] (0xc0006ff040) (5) Data frame handling\nI0505 00:19:04.975509 1958 log.go:172] (0xc000ab4160) Data frame received for 1\nI0505 00:19:04.975525 1958 log.go:172] (0xc0006efd60) (1) Data frame handling\nI0505 00:19:04.975532 1958 log.go:172] (0xc0006efd60) (1) Data frame sent\nI0505 00:19:04.975546 1958 log.go:172] (0xc000ab4160) (0xc0006efd60) Stream removed, broadcasting: 1\nI0505 00:19:04.975586 1958 log.go:172] (0xc000ab4160) Go away received\nI0505 00:19:04.975842 1958 log.go:172] (0xc000ab4160) (0xc0006efd60) Stream removed, broadcasting: 1\nI0505 00:19:04.975858 1958 log.go:172] (0xc000ab4160) (0xc0006feb40) Stream removed, broadcasting: 3\nI0505 00:19:04.975866 1958 log.go:172] (0xc000ab4160) (0xc0006ff040) Stream removed, broadcasting: 5\n" May 5 00:19:04.980: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-6087.svc.cluster.local\tcanonical name = externalsvc.services-6087.svc.cluster.local.\nName:\texternalsvc.services-6087.svc.cluster.local\nAddress: 10.110.102.219\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6087, will wait for the garbage collector to delete the pods May 5 00:19:05.040: INFO: Deleting ReplicationController externalsvc took: 6.647519ms May 5 00:19:05.340: INFO: Terminating ReplicationController externalsvc pods took: 300.232376ms May 5 00:19:15.308: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:19:15.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6087" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:20.981 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":288,"completed":129,"skipped":2195,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:19:15.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 5 00:19:15.487: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5fc83bf7-d915-410f-aaa9-66e3ab0d275b" in namespace "downward-api-6175" to be "Succeeded or Failed" May 5 00:19:15.521: INFO: Pod 
"downwardapi-volume-5fc83bf7-d915-410f-aaa9-66e3ab0d275b": Phase="Pending", Reason="", readiness=false. Elapsed: 33.995096ms May 5 00:19:17.524: INFO: Pod "downwardapi-volume-5fc83bf7-d915-410f-aaa9-66e3ab0d275b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037157289s May 5 00:19:19.706: INFO: Pod "downwardapi-volume-5fc83bf7-d915-410f-aaa9-66e3ab0d275b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.219122468s STEP: Saw pod success May 5 00:19:19.707: INFO: Pod "downwardapi-volume-5fc83bf7-d915-410f-aaa9-66e3ab0d275b" satisfied condition "Succeeded or Failed" May 5 00:19:19.714: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-5fc83bf7-d915-410f-aaa9-66e3ab0d275b container client-container: STEP: delete the pod May 5 00:19:19.743: INFO: Waiting for pod downwardapi-volume-5fc83bf7-d915-410f-aaa9-66e3ab0d275b to disappear May 5 00:19:19.797: INFO: Pod downwardapi-volume-5fc83bf7-d915-410f-aaa9-66e3ab0d275b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:19:19.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6175" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":130,"skipped":2211,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:19:19.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-1084 STEP: creating service affinity-clusterip-transition in namespace services-1084 STEP: creating replication controller affinity-clusterip-transition in namespace services-1084 I0505 00:19:20.011633 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-1084, replica count: 3 I0505 00:19:23.062029 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0505 00:19:26.062333 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 5 00:19:26.067: INFO: Creating new exec pod May 5 00:19:31.085: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1084 execpod-affinitynxgq2 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' May 5 00:19:31.326: INFO: stderr: "I0505 00:19:31.239356 1978 log.go:172] (0xc0005d82c0) (0xc0003ce960) Create stream\nI0505 00:19:31.239526 1978 log.go:172] (0xc0005d82c0) (0xc0003ce960) Stream added, broadcasting: 1\nI0505 00:19:31.241739 1978 log.go:172] (0xc0005d82c0) Reply frame received for 1\nI0505 00:19:31.241773 1978 log.go:172] (0xc0005d82c0) (0xc000ac4000) Create stream\nI0505 00:19:31.241783 1978 log.go:172] (0xc0005d82c0) (0xc000ac4000) Stream added, broadcasting: 3\nI0505 00:19:31.242669 1978 log.go:172] (0xc0005d82c0) Reply frame received for 3\nI0505 00:19:31.242707 1978 log.go:172] (0xc0005d82c0) (0xc000ac40a0) Create stream\nI0505 00:19:31.242732 1978 log.go:172] (0xc0005d82c0) (0xc000ac40a0) Stream added, broadcasting: 5\nI0505 00:19:31.243556 1978 log.go:172] (0xc0005d82c0) Reply frame received for 5\nI0505 00:19:31.319856 1978 log.go:172] (0xc0005d82c0) Data frame received for 5\nI0505 00:19:31.319922 1978 log.go:172] (0xc000ac40a0) (5) Data frame handling\nI0505 00:19:31.319953 1978 log.go:172] (0xc000ac40a0) (5) Data frame sent\nI0505 00:19:31.319966 1978 log.go:172] (0xc0005d82c0) Data frame received for 5\nI0505 00:19:31.319974 1978 log.go:172] (0xc000ac40a0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI0505 00:19:31.320036 1978 log.go:172] (0xc000ac40a0) (5) Data frame sent\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0505 00:19:31.320405 1978 log.go:172] (0xc0005d82c0) Data frame received for 3\nI0505 00:19:31.320446 1978 log.go:172] (0xc0005d82c0) Data frame received for 5\nI0505 00:19:31.320474 1978 log.go:172] (0xc000ac40a0) (5) Data frame handling\nI0505 00:19:31.320499 1978 log.go:172] (0xc000ac4000) (3) Data frame handling\nI0505 00:19:31.322304 
1978 log.go:172] (0xc0005d82c0) Data frame received for 1\nI0505 00:19:31.322331 1978 log.go:172] (0xc0003ce960) (1) Data frame handling\nI0505 00:19:31.322345 1978 log.go:172] (0xc0003ce960) (1) Data frame sent\nI0505 00:19:31.322365 1978 log.go:172] (0xc0005d82c0) (0xc0003ce960) Stream removed, broadcasting: 1\nI0505 00:19:31.322391 1978 log.go:172] (0xc0005d82c0) Go away received\nI0505 00:19:31.322754 1978 log.go:172] (0xc0005d82c0) (0xc0003ce960) Stream removed, broadcasting: 1\nI0505 00:19:31.322770 1978 log.go:172] (0xc0005d82c0) (0xc000ac4000) Stream removed, broadcasting: 3\nI0505 00:19:31.322778 1978 log.go:172] (0xc0005d82c0) (0xc000ac40a0) Stream removed, broadcasting: 5\n" May 5 00:19:31.327: INFO: stdout: "" May 5 00:19:31.327: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1084 execpod-affinitynxgq2 -- /bin/sh -x -c nc -zv -t -w 2 10.109.245.201 80' May 5 00:19:31.534: INFO: stderr: "I0505 00:19:31.472247 2001 log.go:172] (0xc000a5b290) (0xc000af2320) Create stream\nI0505 00:19:31.472327 2001 log.go:172] (0xc000a5b290) (0xc000af2320) Stream added, broadcasting: 1\nI0505 00:19:31.477409 2001 log.go:172] (0xc000a5b290) Reply frame received for 1\nI0505 00:19:31.477446 2001 log.go:172] (0xc000a5b290) (0xc00084e6e0) Create stream\nI0505 00:19:31.477457 2001 log.go:172] (0xc000a5b290) (0xc00084e6e0) Stream added, broadcasting: 3\nI0505 00:19:31.478181 2001 log.go:172] (0xc000a5b290) Reply frame received for 3\nI0505 00:19:31.478210 2001 log.go:172] (0xc000a5b290) (0xc00067ee60) Create stream\nI0505 00:19:31.478218 2001 log.go:172] (0xc000a5b290) (0xc00067ee60) Stream added, broadcasting: 5\nI0505 00:19:31.479091 2001 log.go:172] (0xc000a5b290) Reply frame received for 5\nI0505 00:19:31.528439 2001 log.go:172] (0xc000a5b290) Data frame received for 5\nI0505 00:19:31.528485 2001 log.go:172] (0xc00067ee60) (5) Data frame handling\nI0505 00:19:31.528505 2001 log.go:172] 
(0xc00067ee60) (5) Data frame sent\n+ nc -zv -t -w 2 10.109.245.201 80\nConnection to 10.109.245.201 80 port [tcp/http] succeeded!\nI0505 00:19:31.528530 2001 log.go:172] (0xc000a5b290) Data frame received for 3\nI0505 00:19:31.528543 2001 log.go:172] (0xc00084e6e0) (3) Data frame handling\nI0505 00:19:31.528594 2001 log.go:172] (0xc000a5b290) Data frame received for 5\nI0505 00:19:31.528644 2001 log.go:172] (0xc00067ee60) (5) Data frame handling\nI0505 00:19:31.530492 2001 log.go:172] (0xc000a5b290) Data frame received for 1\nI0505 00:19:31.530514 2001 log.go:172] (0xc000af2320) (1) Data frame handling\nI0505 00:19:31.530532 2001 log.go:172] (0xc000af2320) (1) Data frame sent\nI0505 00:19:31.530548 2001 log.go:172] (0xc000a5b290) (0xc000af2320) Stream removed, broadcasting: 1\nI0505 00:19:31.530564 2001 log.go:172] (0xc000a5b290) Go away received\nI0505 00:19:31.530963 2001 log.go:172] (0xc000a5b290) (0xc000af2320) Stream removed, broadcasting: 1\nI0505 00:19:31.530991 2001 log.go:172] (0xc000a5b290) (0xc00084e6e0) Stream removed, broadcasting: 3\nI0505 00:19:31.531004 2001 log.go:172] (0xc000a5b290) (0xc00067ee60) Stream removed, broadcasting: 5\n" May 5 00:19:31.534: INFO: stdout: "" May 5 00:19:31.543: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1084 execpod-affinitynxgq2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.109.245.201:80/ ; done' May 5 00:19:31.847: INFO: stderr: "I0505 00:19:31.679366 2021 log.go:172] (0xc000bb11e0) (0xc000bb2460) Create stream\nI0505 00:19:31.679434 2021 log.go:172] (0xc000bb11e0) (0xc000bb2460) Stream added, broadcasting: 1\nI0505 00:19:31.683270 2021 log.go:172] (0xc000bb11e0) Reply frame received for 1\nI0505 00:19:31.683327 2021 log.go:172] (0xc000bb11e0) (0xc0006b8500) Create stream\nI0505 00:19:31.683344 2021 log.go:172] (0xc000bb11e0) (0xc0006b8500) Stream added, broadcasting: 3\nI0505 
00:19:31.684459 2021 log.go:172] (0xc000bb11e0) Reply frame received for 3\nI0505 00:19:31.684493 2021 log.go:172] (0xc000bb11e0) (0xc00061a500) Create stream\nI0505 00:19:31.684502 2021 log.go:172] (0xc000bb11e0) (0xc00061a500) Stream added, broadcasting: 5\nI0505 00:19:31.685451 2021 log.go:172] (0xc000bb11e0) Reply frame received for 5\nI0505 00:19:31.746837 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.746868 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.746879 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.746888 2021 log.go:172] (0xc000bb11e0) Data frame received for 5\nI0505 00:19:31.746895 2021 log.go:172] (0xc00061a500) (5) Data frame handling\nI0505 00:19:31.746915 2021 log.go:172] (0xc00061a500) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:31.753326 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.753365 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.753388 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.753996 2021 log.go:172] (0xc000bb11e0) Data frame received for 5\nI0505 00:19:31.754024 2021 log.go:172] (0xc00061a500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:31.754047 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.754069 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.754099 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.754141 2021 log.go:172] (0xc00061a500) (5) Data frame sent\nI0505 00:19:31.758396 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.758414 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.758431 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.759359 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 
00:19:31.759390 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.759404 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.759425 2021 log.go:172] (0xc000bb11e0) Data frame received for 5\nI0505 00:19:31.759446 2021 log.go:172] (0xc00061a500) (5) Data frame handling\nI0505 00:19:31.759474 2021 log.go:172] (0xc00061a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:31.766339 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.766353 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.766360 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.767122 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.767147 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.767159 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.767184 2021 log.go:172] (0xc000bb11e0) Data frame received for 5\nI0505 00:19:31.767201 2021 log.go:172] (0xc00061a500) (5) Data frame handling\nI0505 00:19:31.767212 2021 log.go:172] (0xc00061a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:31.771172 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.771190 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.771198 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.771712 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.771738 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.771752 2021 log.go:172] (0xc000bb11e0) Data frame received for 5\nI0505 00:19:31.771770 2021 log.go:172] (0xc00061a500) (5) Data frame handling\nI0505 00:19:31.771778 2021 log.go:172] (0xc00061a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:31.771796 2021 log.go:172] (0xc0006b8500) (3) Data frame 
sent\nI0505 00:19:31.776579 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.776604 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.776617 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.777657 2021 log.go:172] (0xc000bb11e0) Data frame received for 5\nI0505 00:19:31.777682 2021 log.go:172] (0xc00061a500) (5) Data frame handling\nI0505 00:19:31.777712 2021 log.go:172] (0xc00061a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:31.777862 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.777884 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.777903 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.782892 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.782930 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.782966 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.783786 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.783802 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.783810 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.783822 2021 log.go:172] (0xc000bb11e0) Data frame received for 5\nI0505 00:19:31.783828 2021 log.go:172] (0xc00061a500) (5) Data frame handling\nI0505 00:19:31.783834 2021 log.go:172] (0xc00061a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:31.790593 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.790630 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.790666 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.791387 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.791416 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.791430 2021 log.go:172] 
(0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.791450 2021 log.go:172] (0xc000bb11e0) Data frame received for 5\nI0505 00:19:31.791461 2021 log.go:172] (0xc00061a500) (5) Data frame handling\nI0505 00:19:31.791473 2021 log.go:172] (0xc00061a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:31.797078 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.797100 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.797328 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.797767 2021 log.go:172] (0xc000bb11e0) Data frame received for 5\nI0505 00:19:31.797797 2021 log.go:172] (0xc00061a500) (5) Data frame handling\nI0505 00:19:31.797839 2021 log.go:172] (0xc00061a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:31.797910 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.797933 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.797946 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.803046 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.803083 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.803114 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.803299 2021 log.go:172] (0xc000bb11e0) Data frame received for 5\nI0505 00:19:31.803310 2021 log.go:172] (0xc00061a500) (5) Data frame handling\nI0505 00:19:31.803316 2021 log.go:172] (0xc00061a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/I0505 00:19:31.803415 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.803446 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.803468 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.803501 2021 log.go:172] (0xc000bb11e0) Data frame received for 5\nI0505 00:19:31.803521 2021 
log.go:172] (0xc00061a500) (5) Data frame handling\nI0505 00:19:31.803544 2021 log.go:172] (0xc00061a500) (5) Data frame sent\n\nI0505 00:19:31.808471 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.808484 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.808491 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.808876 2021 log.go:172] (0xc000bb11e0) Data frame received for 5\nI0505 00:19:31.808887 2021 log.go:172] (0xc00061a500) (5) Data frame handling\nI0505 00:19:31.808894 2021 log.go:172] (0xc00061a500) (5) Data frame sent\nI0505 00:19:31.808899 2021 log.go:172] (0xc000bb11e0) Data frame received for 5\nI0505 00:19:31.808903 2021 log.go:172] (0xc00061a500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:31.808914 2021 log.go:172] (0xc00061a500) (5) Data frame sent\nI0505 00:19:31.808983 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.809008 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.809036 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.814391 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.814409 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.814419 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.815048 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.815071 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.815093 2021 log.go:172] (0xc000bb11e0) Data frame received for 5\nI0505 00:19:31.815128 2021 log.go:172] (0xc00061a500) (5) Data frame handling\nI0505 00:19:31.815163 2021 log.go:172] (0xc00061a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:31.815191 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.819869 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 
00:19:31.819883 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.819891 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.820288 2021 log.go:172] (0xc000bb11e0) Data frame received for 5\nI0505 00:19:31.820311 2021 log.go:172] (0xc00061a500) (5) Data frame handling\nI0505 00:19:31.820332 2021 log.go:172] (0xc00061a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:31.820436 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.820453 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.820471 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.824636 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.824658 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.824679 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.825336 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.825359 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.825373 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.825399 2021 log.go:172] (0xc000bb11e0) Data frame received for 5\nI0505 00:19:31.825420 2021 log.go:172] (0xc00061a500) (5) Data frame handling\nI0505 00:19:31.825436 2021 log.go:172] (0xc00061a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:31.829817 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.829847 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.829873 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.830226 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.830255 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.830269 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.830287 2021 log.go:172] (0xc000bb11e0) Data frame 
received for 5\nI0505 00:19:31.830297 2021 log.go:172] (0xc00061a500) (5) Data frame handling\nI0505 00:19:31.830308 2021 log.go:172] (0xc00061a500) (5) Data frame sent\nI0505 00:19:31.830321 2021 log.go:172] (0xc000bb11e0) Data frame received for 5\nI0505 00:19:31.830335 2021 log.go:172] (0xc00061a500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:31.830370 2021 log.go:172] (0xc00061a500) (5) Data frame sent\nI0505 00:19:31.834683 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.834704 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.834724 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.835239 2021 log.go:172] (0xc000bb11e0) Data frame received for 5\nI0505 00:19:31.835272 2021 log.go:172] (0xc00061a500) (5) Data frame handling\nI0505 00:19:31.835288 2021 log.go:172] (0xc00061a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:31.835307 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.835325 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.835339 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.840145 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.840183 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.840225 2021 log.go:172] (0xc0006b8500) (3) Data frame sent\nI0505 00:19:31.840875 2021 log.go:172] (0xc000bb11e0) Data frame received for 5\nI0505 00:19:31.840898 2021 log.go:172] (0xc00061a500) (5) Data frame handling\nI0505 00:19:31.841103 2021 log.go:172] (0xc000bb11e0) Data frame received for 3\nI0505 00:19:31.841340 2021 log.go:172] (0xc0006b8500) (3) Data frame handling\nI0505 00:19:31.843116 2021 log.go:172] (0xc000bb11e0) Data frame received for 1\nI0505 00:19:31.843149 2021 log.go:172] (0xc000bb2460) (1) Data frame handling\nI0505 00:19:31.843174 2021 log.go:172] 
(0xc000bb2460) (1) Data frame sent\nI0505 00:19:31.843190 2021 log.go:172] (0xc000bb11e0) (0xc000bb2460) Stream removed, broadcasting: 1\nI0505 00:19:31.843210 2021 log.go:172] (0xc000bb11e0) Go away received\nI0505 00:19:31.843578 2021 log.go:172] (0xc000bb11e0) (0xc000bb2460) Stream removed, broadcasting: 1\nI0505 00:19:31.843598 2021 log.go:172] (0xc000bb11e0) (0xc0006b8500) Stream removed, broadcasting: 3\nI0505 00:19:31.843607 2021 log.go:172] (0xc000bb11e0) (0xc00061a500) Stream removed, broadcasting: 5\n" May 5 00:19:31.848: INFO: stdout: "\naffinity-clusterip-transition-rlnqj\naffinity-clusterip-transition-rlnqj\naffinity-clusterip-transition-ld7fg\naffinity-clusterip-transition-ld7fg\naffinity-clusterip-transition-rlnqj\naffinity-clusterip-transition-bq228\naffinity-clusterip-transition-bq228\naffinity-clusterip-transition-bq228\naffinity-clusterip-transition-rlnqj\naffinity-clusterip-transition-ld7fg\naffinity-clusterip-transition-ld7fg\naffinity-clusterip-transition-bq228\naffinity-clusterip-transition-rlnqj\naffinity-clusterip-transition-rlnqj\naffinity-clusterip-transition-ld7fg\naffinity-clusterip-transition-ld7fg" May 5 00:19:31.848: INFO: Received response from host: May 5 00:19:31.848: INFO: Received response from host: affinity-clusterip-transition-rlnqj May 5 00:19:31.848: INFO: Received response from host: affinity-clusterip-transition-rlnqj May 5 00:19:31.848: INFO: Received response from host: affinity-clusterip-transition-ld7fg May 5 00:19:31.848: INFO: Received response from host: affinity-clusterip-transition-ld7fg May 5 00:19:31.848: INFO: Received response from host: affinity-clusterip-transition-rlnqj May 5 00:19:31.848: INFO: Received response from host: affinity-clusterip-transition-bq228 May 5 00:19:31.848: INFO: Received response from host: affinity-clusterip-transition-bq228 May 5 00:19:31.848: INFO: Received response from host: affinity-clusterip-transition-bq228 May 5 00:19:31.848: INFO: Received response from host: 
affinity-clusterip-transition-rlnqj May 5 00:19:31.848: INFO: Received response from host: affinity-clusterip-transition-ld7fg May 5 00:19:31.848: INFO: Received response from host: affinity-clusterip-transition-ld7fg May 5 00:19:31.848: INFO: Received response from host: affinity-clusterip-transition-bq228 May 5 00:19:31.848: INFO: Received response from host: affinity-clusterip-transition-rlnqj May 5 00:19:31.848: INFO: Received response from host: affinity-clusterip-transition-rlnqj May 5 00:19:31.848: INFO: Received response from host: affinity-clusterip-transition-ld7fg May 5 00:19:31.848: INFO: Received response from host: affinity-clusterip-transition-ld7fg May 5 00:19:31.856: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1084 execpod-affinitynxgq2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.109.245.201:80/ ; done' May 5 00:19:32.159: INFO: stderr: "I0505 00:19:31.990135 2041 log.go:172] (0xc00091d080) (0xc00094e820) Create stream\nI0505 00:19:31.990195 2041 log.go:172] (0xc00091d080) (0xc00094e820) Stream added, broadcasting: 1\nI0505 00:19:31.994952 2041 log.go:172] (0xc00091d080) Reply frame received for 1\nI0505 00:19:31.995016 2041 log.go:172] (0xc00091d080) (0xc0006ef180) Create stream\nI0505 00:19:31.995036 2041 log.go:172] (0xc00091d080) (0xc0006ef180) Stream added, broadcasting: 3\nI0505 00:19:31.995998 2041 log.go:172] (0xc00091d080) Reply frame received for 3\nI0505 00:19:31.996027 2041 log.go:172] (0xc00091d080) (0xc00063a500) Create stream\nI0505 00:19:31.996035 2041 log.go:172] (0xc00091d080) (0xc00063a500) Stream added, broadcasting: 5\nI0505 00:19:31.996908 2041 log.go:172] (0xc00091d080) Reply frame received for 5\nI0505 00:19:32.055083 2041 log.go:172] (0xc00091d080) Data frame received for 5\nI0505 00:19:32.055148 2041 log.go:172] (0xc00063a500) (5) Data frame handling\nI0505 00:19:32.055175 2041 log.go:172] 
(0xc00063a500) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:32.055206 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.055237 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.055272 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.060778 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.060815 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.060848 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.061057 2041 log.go:172] (0xc00091d080) Data frame received for 5\nI0505 00:19:32.061071 2041 log.go:172] (0xc00063a500) (5) Data frame handling\nI0505 00:19:32.061077 2041 log.go:172] (0xc00063a500) (5) Data frame sent\nI0505 00:19:32.061092 2041 log.go:172] (0xc00091d080) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeoutI0505 00:19:32.061103 2041 log.go:172] (0xc00063a500) (5) Data frame handling\nI0505 00:19:32.061362 2041 log.go:172] (0xc00063a500) (5) Data frame sent\n 2 http://10.109.245.201:80/\nI0505 00:19:32.061428 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.061455 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.061488 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.068312 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.068335 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.068357 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.068899 2041 log.go:172] (0xc00091d080) Data frame received for 5\nI0505 00:19:32.068925 2041 log.go:172] (0xc00063a500) (5) Data frame handling\nI0505 00:19:32.068936 2041 log.go:172] (0xc00063a500) (5) Data frame sent\nI0505 00:19:32.068945 2041 log.go:172] (0xc00091d080) Data frame received for 5\nI0505 00:19:32.068953 2041 log.go:172] (0xc00063a500) (5) Data frame handling\n+ echo\n+ 
curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:32.068980 2041 log.go:172] (0xc00063a500) (5) Data frame sent\nI0505 00:19:32.068991 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.069014 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.069037 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.075908 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.075939 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.075948 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.076497 2041 log.go:172] (0xc00091d080) Data frame received for 5\nI0505 00:19:32.076518 2041 log.go:172] (0xc00063a500) (5) Data frame handling\nI0505 00:19:32.076532 2041 log.go:172] (0xc00063a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:32.076542 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.076586 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.076615 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.081777 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.081799 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.081823 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.082277 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.082290 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.082299 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.082310 2041 log.go:172] (0xc00091d080) Data frame received for 5\nI0505 00:19:32.082319 2041 log.go:172] (0xc00063a500) (5) Data frame handling\nI0505 00:19:32.082334 2041 log.go:172] (0xc00063a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:32.088834 2041 log.go:172] (0xc00091d080) Data frame received for 
3\nI0505 00:19:32.088861 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.088878 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.089456 2041 log.go:172] (0xc00091d080) Data frame received for 5\nI0505 00:19:32.089484 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.089501 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.089511 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.089524 2041 log.go:172] (0xc00063a500) (5) Data frame handling\nI0505 00:19:32.089530 2041 log.go:172] (0xc00063a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:32.093725 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.093745 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.093762 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.094085 2041 log.go:172] (0xc00091d080) Data frame received for 5\nI0505 00:19:32.094106 2041 log.go:172] (0xc00063a500) (5) Data frame handling\nI0505 00:19:32.094118 2041 log.go:172] (0xc00063a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:32.094133 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.094147 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.094157 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.098530 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.098553 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.098569 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.099236 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.099251 2041 log.go:172] (0xc00091d080) Data frame received for 5\nI0505 00:19:32.099278 2041 log.go:172] (0xc00063a500) (5) Data frame handling\nI0505 00:19:32.099289 2041 log.go:172] (0xc00063a500) 
(5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:32.099307 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.099335 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.103276 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.103297 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.103317 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.103626 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.103651 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.103663 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.103680 2041 log.go:172] (0xc00091d080) Data frame received for 5\nI0505 00:19:32.103687 2041 log.go:172] (0xc00063a500) (5) Data frame handling\nI0505 00:19:32.103697 2041 log.go:172] (0xc00063a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:32.108038 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.108062 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.108081 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.108518 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.108546 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.108559 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.108575 2041 log.go:172] (0xc00091d080) Data frame received for 5\nI0505 00:19:32.108589 2041 log.go:172] (0xc00063a500) (5) Data frame handling\nI0505 00:19:32.108599 2041 log.go:172] (0xc00063a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:32.113523 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.113539 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.113548 2041 log.go:172] 
(0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.113750 2041 log.go:172] (0xc00091d080) Data frame received for 5\nI0505 00:19:32.113762 2041 log.go:172] (0xc00063a500) (5) Data frame handling\nI0505 00:19:32.113770 2041 log.go:172] (0xc00063a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:32.113781 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.113820 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.113835 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.119712 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.119734 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.119752 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.120571 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.120590 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.120598 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.120609 2041 log.go:172] (0xc00091d080) Data frame received for 5\nI0505 00:19:32.120615 2041 log.go:172] (0xc00063a500) (5) Data frame handling\nI0505 00:19:32.120626 2041 log.go:172] (0xc00063a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:32.124961 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.124987 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.125019 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.126017 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.126056 2041 log.go:172] (0xc00091d080) Data frame received for 5\nI0505 00:19:32.126097 2041 log.go:172] (0xc00063a500) (5) Data frame handling\nI0505 00:19:32.126116 2041 log.go:172] (0xc00063a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:32.126136 2041 
log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.126148 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.130165 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.130189 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.130210 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.131020 2041 log.go:172] (0xc00091d080) Data frame received for 5\nI0505 00:19:32.131054 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.131105 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.131135 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.131165 2041 log.go:172] (0xc00063a500) (5) Data frame handling\nI0505 00:19:32.131193 2041 log.go:172] (0xc00063a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:32.137323 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.137350 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.137369 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.137884 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.137915 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.137936 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.137972 2041 log.go:172] (0xc00091d080) Data frame received for 5\nI0505 00:19:32.138016 2041 log.go:172] (0xc00063a500) (5) Data frame handling\nI0505 00:19:32.138044 2041 log.go:172] (0xc00063a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:32.143042 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.143072 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.143097 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.143406 2041 log.go:172] (0xc00091d080) Data frame received for 5\nI0505 
00:19:32.143426 2041 log.go:172] (0xc00063a500) (5) Data frame handling\nI0505 00:19:32.143441 2041 log.go:172] (0xc00063a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.245.201:80/\nI0505 00:19:32.143569 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.143589 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.143602 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.150646 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.150677 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.150701 2041 log.go:172] (0xc0006ef180) (3) Data frame sent\nI0505 00:19:32.151351 2041 log.go:172] (0xc00091d080) Data frame received for 5\nI0505 00:19:32.151368 2041 log.go:172] (0xc00063a500) (5) Data frame handling\nI0505 00:19:32.151468 2041 log.go:172] (0xc00091d080) Data frame received for 3\nI0505 00:19:32.151481 2041 log.go:172] (0xc0006ef180) (3) Data frame handling\nI0505 00:19:32.153548 2041 log.go:172] (0xc00091d080) Data frame received for 1\nI0505 00:19:32.153571 2041 log.go:172] (0xc00094e820) (1) Data frame handling\nI0505 00:19:32.153595 2041 log.go:172] (0xc00094e820) (1) Data frame sent\nI0505 00:19:32.153625 2041 log.go:172] (0xc00091d080) (0xc00094e820) Stream removed, broadcasting: 1\nI0505 00:19:32.153782 2041 log.go:172] (0xc00091d080) Go away received\nI0505 00:19:32.154014 2041 log.go:172] (0xc00091d080) (0xc00094e820) Stream removed, broadcasting: 1\nI0505 00:19:32.154033 2041 log.go:172] (0xc00091d080) (0xc0006ef180) Stream removed, broadcasting: 3\nI0505 00:19:32.154042 2041 log.go:172] (0xc00091d080) (0xc00063a500) Stream removed, broadcasting: 5\n" May 5 00:19:32.159: INFO: stdout: 
"\naffinity-clusterip-transition-bq228\naffinity-clusterip-transition-bq228\naffinity-clusterip-transition-bq228\naffinity-clusterip-transition-bq228\naffinity-clusterip-transition-bq228\naffinity-clusterip-transition-bq228\naffinity-clusterip-transition-bq228\naffinity-clusterip-transition-bq228\naffinity-clusterip-transition-bq228\naffinity-clusterip-transition-bq228\naffinity-clusterip-transition-bq228\naffinity-clusterip-transition-bq228\naffinity-clusterip-transition-bq228\naffinity-clusterip-transition-bq228\naffinity-clusterip-transition-bq228\naffinity-clusterip-transition-bq228" May 5 00:19:32.159: INFO: Received response from host: May 5 00:19:32.159: INFO: Received response from host: affinity-clusterip-transition-bq228 May 5 00:19:32.159: INFO: Received response from host: affinity-clusterip-transition-bq228 May 5 00:19:32.159: INFO: Received response from host: affinity-clusterip-transition-bq228 May 5 00:19:32.160: INFO: Received response from host: affinity-clusterip-transition-bq228 May 5 00:19:32.160: INFO: Received response from host: affinity-clusterip-transition-bq228 May 5 00:19:32.160: INFO: Received response from host: affinity-clusterip-transition-bq228 May 5 00:19:32.160: INFO: Received response from host: affinity-clusterip-transition-bq228 May 5 00:19:32.160: INFO: Received response from host: affinity-clusterip-transition-bq228 May 5 00:19:32.160: INFO: Received response from host: affinity-clusterip-transition-bq228 May 5 00:19:32.160: INFO: Received response from host: affinity-clusterip-transition-bq228 May 5 00:19:32.160: INFO: Received response from host: affinity-clusterip-transition-bq228 May 5 00:19:32.160: INFO: Received response from host: affinity-clusterip-transition-bq228 May 5 00:19:32.160: INFO: Received response from host: affinity-clusterip-transition-bq228 May 5 00:19:32.160: INFO: Received response from host: affinity-clusterip-transition-bq228 May 5 00:19:32.160: INFO: Received response from host: 
affinity-clusterip-transition-bq228 May 5 00:19:32.160: INFO: Received response from host: affinity-clusterip-transition-bq228 May 5 00:19:32.160: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-1084, will wait for the garbage collector to delete the pods May 5 00:19:32.431: INFO: Deleting ReplicationController affinity-clusterip-transition took: 9.378305ms May 5 00:19:32.831: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 400.246607ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:19:45.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1084" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:25.194 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":131,"skipped":2217,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 
00:19:45.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command May 5 00:19:45.172: INFO: Waiting up to 5m0s for pod "client-containers-61f89ff4-9a15-4c7c-86c2-170d8e86fe22" in namespace "containers-4049" to be "Succeeded or Failed" May 5 00:19:45.183: INFO: Pod "client-containers-61f89ff4-9a15-4c7c-86c2-170d8e86fe22": Phase="Pending", Reason="", readiness=false. Elapsed: 10.565025ms May 5 00:19:47.189: INFO: Pod "client-containers-61f89ff4-9a15-4c7c-86c2-170d8e86fe22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01671093s May 5 00:19:49.194: INFO: Pod "client-containers-61f89ff4-9a15-4c7c-86c2-170d8e86fe22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021721016s STEP: Saw pod success May 5 00:19:49.194: INFO: Pod "client-containers-61f89ff4-9a15-4c7c-86c2-170d8e86fe22" satisfied condition "Succeeded or Failed" May 5 00:19:49.197: INFO: Trying to get logs from node latest-worker pod client-containers-61f89ff4-9a15-4c7c-86c2-170d8e86fe22 container test-container: STEP: delete the pod May 5 00:19:49.276: INFO: Waiting for pod client-containers-61f89ff4-9a15-4c7c-86c2-170d8e86fe22 to disappear May 5 00:19:49.284: INFO: Pod client-containers-61f89ff4-9a15-4c7c-86c2-170d8e86fe22 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:19:49.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4049" for this suite. 
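For context, the entrypoint-override behavior exercised by the test above can be reproduced with a pod manifest along these lines (a minimal sketch only; the test log does not dump this pod's spec, so the image, command, and args values here are illustrative assumptions, not taken from the actual test pod):

```yaml
# Sketch: overriding an image's default ENTRYPOINT (illustrative values).
# Setting spec.containers[].command replaces the image ENTRYPOINT;
# spec.containers[].args would replace the image CMD.
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container           # container name as logged above
    image: docker.io/library/busybox:1.29   # assumed test image
    command: ["/bin/echo"]         # overrides ENTRYPOINT (assumption)
    args: ["override", "worked"]   # overrides CMD (assumption)
```

The test then waits for the pod to reach "Succeeded" and reads the container log to confirm the overridden command ran, matching the "Saw pod success" / "Trying to get logs" steps above.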
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":288,"completed":132,"skipped":2247,"failed":0} ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:19:49.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 5 00:19:49.351: INFO: PodSpec: initContainers in spec.initContainers May 5 00:20:38.515: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-1cec16c6-0b44-4536-9f37-64ec1e847211", GenerateName:"", Namespace:"init-container-2982", SelfLink:"/api/v1/namespaces/init-container-2982/pods/pod-init-1cec16c6-0b44-4536-9f37-64ec1e847211", UID:"cb7a8dd6-27c3-4a36-b8e3-4661b9fbbaae", ResourceVersion:"1525244", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724234789, loc:(*time.Location)(0x7c2f200)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"351883911"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002a960a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002a960e0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002a96120), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002a96140)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-276fz", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0029a6000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, 
Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-276fz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-276fz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-276fz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003682098), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0010f0000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003682120)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003682140)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", 
Priority:(*int32)(0xc003682148), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00368214c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724234789, loc:(*time.Location)(0x7c2f200)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724234789, loc:(*time.Location)(0x7c2f200)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724234789, loc:(*time.Location)(0x7c2f200)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724234789, loc:(*time.Location)(0x7c2f200)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.12", PodIP:"10.244.2.5", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.5"}}, StartTime:(*v1.Time)(0xc002a961a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(0xc0010f0150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0010f01c0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://8e2c893bd552849f1a56b7ddd6cd5976488015ac3094d17a75da83ee8af0f4b2", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002a961e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002a961c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0036821cf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:20:38.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2982" for this suite. 
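The Go struct dumped above can be summarized as the following manifest sketch (reconstructed from the InitContainers, Containers, and RestartPolicy fields in the dump; only the fields relevant to the test are shown):

```yaml
# Reconstruction of the test pod from the spec dump: init1 always
# fails, so with restartPolicy: Always the kubelet keeps restarting
# it, init2 stays Waiting, and the app container run1 never starts --
# which is exactly what the test asserts.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-1cec16c6-0b44-4536-9f37-64ec1e847211
  labels:
    name: foo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]   # exits non-zero; blocks initialization
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]    # never reached while init1 keeps failing
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.2
    resources:
      limits:
        cpu: 100m
      requests:
        cpu: 100m
```

The status block in the dump confirms the expected outcome: init1 shows RestartCount:3 with a Terminated last state, while init2 and run1 remain Waiting with empty ContainerIDs.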
• [SLOW TEST:49.249 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":288,"completed":133,"skipped":2247,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:20:38.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 5 00:20:39.186: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 5 00:20:41.222: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724234839, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724234839, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724234839, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724234839, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 00:20:44.253: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 5 00:20:44.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:20:45.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9120" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.255 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":288,"completed":134,"skipped":2252,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:20:45.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:20:49.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5869" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":288,"completed":135,"skipped":2253,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:20:49.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-9d48536b-9daa-4887-abb5-fd59b2f28546 STEP: Creating a pod to test consume configMaps May 5 00:20:49.965: INFO: Waiting up to 5m0s for pod "pod-configmaps-07639ea7-0bd5-4b43-9fe2-fc5444262e4a" in namespace "configmap-3700" to be "Succeeded or Failed" May 5 00:20:49.984: INFO: Pod "pod-configmaps-07639ea7-0bd5-4b43-9fe2-fc5444262e4a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.161874ms May 5 00:20:51.989: INFO: Pod "pod-configmaps-07639ea7-0bd5-4b43-9fe2-fc5444262e4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023922645s May 5 00:20:53.993: INFO: Pod "pod-configmaps-07639ea7-0bd5-4b43-9fe2-fc5444262e4a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028550754s STEP: Saw pod success May 5 00:20:53.993: INFO: Pod "pod-configmaps-07639ea7-0bd5-4b43-9fe2-fc5444262e4a" satisfied condition "Succeeded or Failed" May 5 00:20:53.997: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-07639ea7-0bd5-4b43-9fe2-fc5444262e4a container configmap-volume-test: STEP: delete the pod May 5 00:20:54.047: INFO: Waiting for pod pod-configmaps-07639ea7-0bd5-4b43-9fe2-fc5444262e4a to disappear May 5 00:20:54.062: INFO: Pod pod-configmaps-07639ea7-0bd5-4b43-9fe2-fc5444262e4a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:20:54.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3700" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":136,"skipped":2256,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:20:54.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 5 00:20:54.152: INFO: Waiting up to 5m0s for pod "pod-6eb202e0-5918-43f9-a95a-72a7fe18ba69" in namespace "emptydir-5453" 
to be "Succeeded or Failed" May 5 00:20:54.173: INFO: Pod "pod-6eb202e0-5918-43f9-a95a-72a7fe18ba69": Phase="Pending", Reason="", readiness=false. Elapsed: 21.299292ms May 5 00:20:56.275: INFO: Pod "pod-6eb202e0-5918-43f9-a95a-72a7fe18ba69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123672639s May 5 00:20:58.341: INFO: Pod "pod-6eb202e0-5918-43f9-a95a-72a7fe18ba69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.189697034s STEP: Saw pod success May 5 00:20:58.342: INFO: Pod "pod-6eb202e0-5918-43f9-a95a-72a7fe18ba69" satisfied condition "Succeeded or Failed" May 5 00:20:58.345: INFO: Trying to get logs from node latest-worker pod pod-6eb202e0-5918-43f9-a95a-72a7fe18ba69 container test-container: STEP: delete the pod May 5 00:20:58.403: INFO: Waiting for pod pod-6eb202e0-5918-43f9-a95a-72a7fe18ba69 to disappear May 5 00:20:58.418: INFO: Pod pod-6eb202e0-5918-43f9-a95a-72a7fe18ba69 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:20:58.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5453" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":137,"skipped":2279,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:20:58.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 00:20:58.900: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 00:21:02.878: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724234858, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724234858, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724234858, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724234858, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 00:21:05.918: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:21:06.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4789" for this suite. STEP: Destroying namespace "webhook-4789-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.763 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":288,"completed":138,"skipped":2313,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:21:06.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:21:24.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5503" for this suite. 
• [SLOW TEST:18.063 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":288,"completed":139,"skipped":2325,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:21:24.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 5 00:21:24.320: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8007' May 5 00:21:27.655: INFO: stderr: "" May 5 00:21:27.656: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
May 5 00:21:28.661: INFO: Selector matched 1 pods for map[app:agnhost] May 5 00:21:28.661: INFO: Found 0 / 1 May 5 00:21:29.660: INFO: Selector matched 1 pods for map[app:agnhost] May 5 00:21:29.660: INFO: Found 0 / 1 May 5 00:21:30.659: INFO: Selector matched 1 pods for map[app:agnhost] May 5 00:21:30.659: INFO: Found 1 / 1 May 5 00:21:30.659: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 5 00:21:30.661: INFO: Selector matched 1 pods for map[app:agnhost] May 5 00:21:30.661: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 5 00:21:30.661: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config patch pod agnhost-master-p9rhc --namespace=kubectl-8007 -p {"metadata":{"annotations":{"x":"y"}}}' May 5 00:21:30.777: INFO: stderr: "" May 5 00:21:30.777: INFO: stdout: "pod/agnhost-master-p9rhc patched\n" STEP: checking annotations May 5 00:21:30.795: INFO: Selector matched 1 pods for map[app:agnhost] May 5 00:21:30.795: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:21:30.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8007" for this suite. 
• [SLOW TEST:6.548 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1468 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":288,"completed":140,"skipped":2349,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:21:30.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments May 5 00:21:30.898: INFO: Waiting up to 5m0s for pod "client-containers-f100494d-a76e-4421-bc32-9d7bf390ff51" in namespace "containers-4958" to be "Succeeded or Failed" May 5 00:21:30.901: INFO: Pod "client-containers-f100494d-a76e-4421-bc32-9d7bf390ff51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.960727ms May 5 00:21:32.905: INFO: Pod "client-containers-f100494d-a76e-4421-bc32-9d7bf390ff51": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007018229s May 5 00:21:34.908: INFO: Pod "client-containers-f100494d-a76e-4421-bc32-9d7bf390ff51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010076852s STEP: Saw pod success May 5 00:21:34.908: INFO: Pod "client-containers-f100494d-a76e-4421-bc32-9d7bf390ff51" satisfied condition "Succeeded or Failed" May 5 00:21:34.910: INFO: Trying to get logs from node latest-worker pod client-containers-f100494d-a76e-4421-bc32-9d7bf390ff51 container test-container: STEP: delete the pod May 5 00:21:35.056: INFO: Waiting for pod client-containers-f100494d-a76e-4421-bc32-9d7bf390ff51 to disappear May 5 00:21:35.064: INFO: Pod client-containers-f100494d-a76e-4421-bc32-9d7bf390ff51 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:21:35.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4958" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":288,"completed":141,"skipped":2367,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:21:35.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on 
modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 5 00:21:41.682: INFO: Successfully updated pod "labelsupdate79a9d3b5-7614-459d-9813-ef2f744cc1c0" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:21:45.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4104" for this suite. • [SLOW TEST:10.718 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":142,"skipped":2378,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:21:45.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-a8ff960c-6403-499e-8421-3bd8b15e8f94 in namespace container-probe-3150 May 5 00:21:49.922: INFO: Started pod test-webserver-a8ff960c-6403-499e-8421-3bd8b15e8f94 in namespace container-probe-3150 STEP: checking the pod's current state and verifying that restartCount is present May 5 00:21:49.926: INFO: Initial restart count of pod test-webserver-a8ff960c-6403-499e-8421-3bd8b15e8f94 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:25:51.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3150" for this suite. • [SLOW TEST:245.410 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":143,"skipped":2422,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:25:51.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:26:02.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5297" for this suite. • [SLOW TEST:11.673 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":288,"completed":144,"skipped":2442,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:26:02.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:26:07.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7401" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":288,"completed":145,"skipped":2444,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:26:07.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-3822 STEP: creating service affinity-nodeport-transition in namespace services-3822 STEP: creating replication controller affinity-nodeport-transition in namespace services-3822 I0505 00:26:07.174938 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-3822, replica count: 3 I0505 00:26:10.225553 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0505 00:26:13.225799 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 5 00:26:13.234: INFO: Creating new exec pod May 5 00:26:18.251: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3822 execpod-affinity8npbp -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' May 5 00:26:18.457: INFO: stderr: "I0505 00:26:18.387601 2107 log.go:172] (0xc00099f4a0) (0xc000a18500) Create stream\nI0505 00:26:18.387661 2107 log.go:172] (0xc00099f4a0) (0xc000a18500) Stream added, broadcasting: 1\nI0505 00:26:18.390435 2107 log.go:172] (0xc00099f4a0) Reply frame received for 1\nI0505 00:26:18.390478 2107 log.go:172] (0xc00099f4a0) (0xc0006bd040) Create stream\nI0505 00:26:18.390490 2107 log.go:172] (0xc00099f4a0) (0xc0006bd040) Stream added, broadcasting: 3\nI0505 00:26:18.391516 2107 log.go:172] (0xc00099f4a0) Reply frame received for 3\nI0505 00:26:18.391555 2107 log.go:172] (0xc00099f4a0) (0xc0006bd5e0) Create stream\nI0505 00:26:18.391564 2107 log.go:172] (0xc00099f4a0) (0xc0006bd5e0) Stream added, broadcasting: 5\nI0505 00:26:18.392610 2107 log.go:172] (0xc00099f4a0) Reply frame received for 5\nI0505 00:26:18.449539 2107 log.go:172] (0xc00099f4a0) Data frame received for 3\nI0505 00:26:18.449609 2107 log.go:172] (0xc0006bd040) (3) Data frame handling\nI0505 00:26:18.449668 2107 log.go:172] (0xc00099f4a0) Data frame received for 5\nI0505 00:26:18.449694 2107 log.go:172] (0xc0006bd5e0) (5) Data frame handling\nI0505 00:26:18.449713 2107 log.go:172] (0xc0006bd5e0) (5) Data frame sent\nI0505 00:26:18.449731 2107 log.go:172] (0xc00099f4a0) Data frame received for 5\nI0505 00:26:18.449749 2107 log.go:172] (0xc0006bd5e0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0505 00:26:18.449805 2107 log.go:172] (0xc0006bd5e0) (5) Data frame sent\nI0505 00:26:18.449894 2107 log.go:172] (0xc00099f4a0) Data frame received for 5\nI0505 00:26:18.449916 2107 log.go:172] (0xc0006bd5e0) (5) Data frame handling\nI0505 00:26:18.451788 2107 log.go:172] (0xc00099f4a0) Data frame 
received for 1\nI0505 00:26:18.451819 2107 log.go:172] (0xc000a18500) (1) Data frame handling\nI0505 00:26:18.451839 2107 log.go:172] (0xc000a18500) (1) Data frame sent\nI0505 00:26:18.451855 2107 log.go:172] (0xc00099f4a0) (0xc000a18500) Stream removed, broadcasting: 1\nI0505 00:26:18.451880 2107 log.go:172] (0xc00099f4a0) Go away received\nI0505 00:26:18.452399 2107 log.go:172] (0xc00099f4a0) (0xc000a18500) Stream removed, broadcasting: 1\nI0505 00:26:18.452427 2107 log.go:172] (0xc00099f4a0) (0xc0006bd040) Stream removed, broadcasting: 3\nI0505 00:26:18.452447 2107 log.go:172] (0xc00099f4a0) (0xc0006bd5e0) Stream removed, broadcasting: 5\n" May 5 00:26:18.457: INFO: stdout: "" May 5 00:26:18.458: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3822 execpod-affinity8npbp -- /bin/sh -x -c nc -zv -t -w 2 10.101.20.32 80' May 5 00:26:18.681: INFO: stderr: "I0505 00:26:18.593805 2130 log.go:172] (0xc00003adc0) (0xc000254000) Create stream\nI0505 00:26:18.593874 2130 log.go:172] (0xc00003adc0) (0xc000254000) Stream added, broadcasting: 1\nI0505 00:26:18.596462 2130 log.go:172] (0xc00003adc0) Reply frame received for 1\nI0505 00:26:18.596495 2130 log.go:172] (0xc00003adc0) (0xc000254f00) Create stream\nI0505 00:26:18.596509 2130 log.go:172] (0xc00003adc0) (0xc000254f00) Stream added, broadcasting: 3\nI0505 00:26:18.598168 2130 log.go:172] (0xc00003adc0) Reply frame received for 3\nI0505 00:26:18.598215 2130 log.go:172] (0xc00003adc0) (0xc000307400) Create stream\nI0505 00:26:18.598233 2130 log.go:172] (0xc00003adc0) (0xc000307400) Stream added, broadcasting: 5\nI0505 00:26:18.599360 2130 log.go:172] (0xc00003adc0) Reply frame received for 5\nI0505 00:26:18.670516 2130 log.go:172] (0xc00003adc0) Data frame received for 5\nI0505 00:26:18.670665 2130 log.go:172] (0xc000307400) (5) Data frame handling\nI0505 00:26:18.670748 2130 log.go:172] (0xc000307400) (5) Data frame sent\nI0505 
00:26:18.670843 2130 log.go:172] (0xc00003adc0) Data frame received for 5\nI0505 00:26:18.670875 2130 log.go:172] (0xc000307400) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.20.32 80\nConnection to 10.101.20.32 80 port [tcp/http] succeeded!\nI0505 00:26:18.670982 2130 log.go:172] (0xc00003adc0) Data frame received for 3\nI0505 00:26:18.671021 2130 log.go:172] (0xc000254f00) (3) Data frame handling\nI0505 00:26:18.675387 2130 log.go:172] (0xc00003adc0) Data frame received for 1\nI0505 00:26:18.675421 2130 log.go:172] (0xc000254000) (1) Data frame handling\nI0505 00:26:18.675459 2130 log.go:172] (0xc000254000) (1) Data frame sent\nI0505 00:26:18.675496 2130 log.go:172] (0xc00003adc0) (0xc000254000) Stream removed, broadcasting: 1\nI0505 00:26:18.675528 2130 log.go:172] (0xc00003adc0) Go away received\nI0505 00:26:18.675861 2130 log.go:172] (0xc00003adc0) (0xc000254000) Stream removed, broadcasting: 1\nI0505 00:26:18.675876 2130 log.go:172] (0xc00003adc0) (0xc000254f00) Stream removed, broadcasting: 3\nI0505 00:26:18.675883 2130 log.go:172] (0xc00003adc0) (0xc000307400) Stream removed, broadcasting: 5\n" May 5 00:26:18.681: INFO: stdout: "" May 5 00:26:18.681: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3822 execpod-affinity8npbp -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31924' May 5 00:26:18.893: INFO: stderr: "I0505 00:26:18.813725 2150 log.go:172] (0xc0009c9810) (0xc000aa0460) Create stream\nI0505 00:26:18.813787 2150 log.go:172] (0xc0009c9810) (0xc000aa0460) Stream added, broadcasting: 1\nI0505 00:26:18.816925 2150 log.go:172] (0xc0009c9810) Reply frame received for 1\nI0505 00:26:18.816971 2150 log.go:172] (0xc0009c9810) (0xc00070a640) Create stream\nI0505 00:26:18.816986 2150 log.go:172] (0xc0009c9810) (0xc00070a640) Stream added, broadcasting: 3\nI0505 00:26:18.818021 2150 log.go:172] (0xc0009c9810) Reply frame received for 3\nI0505 00:26:18.818053 2150 log.go:172] 
(0xc0009c9810) (0xc0006865a0) Create stream\nI0505 00:26:18.818062 2150 log.go:172] (0xc0009c9810) (0xc0006865a0) Stream added, broadcasting: 5\nI0505 00:26:18.818713 2150 log.go:172] (0xc0009c9810) Reply frame received for 5\nI0505 00:26:18.886276 2150 log.go:172] (0xc0009c9810) Data frame received for 5\nI0505 00:26:18.886326 2150 log.go:172] (0xc0006865a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31924\nConnection to 172.17.0.13 31924 port [tcp/31924] succeeded!\nI0505 00:26:18.886354 2150 log.go:172] (0xc0009c9810) Data frame received for 3\nI0505 00:26:18.886394 2150 log.go:172] (0xc00070a640) (3) Data frame handling\nI0505 00:26:18.886431 2150 log.go:172] (0xc0006865a0) (5) Data frame sent\nI0505 00:26:18.886457 2150 log.go:172] (0xc0009c9810) Data frame received for 5\nI0505 00:26:18.886477 2150 log.go:172] (0xc0006865a0) (5) Data frame handling\nI0505 00:26:18.888067 2150 log.go:172] (0xc0009c9810) Data frame received for 1\nI0505 00:26:18.888103 2150 log.go:172] (0xc000aa0460) (1) Data frame handling\nI0505 00:26:18.888124 2150 log.go:172] (0xc000aa0460) (1) Data frame sent\nI0505 00:26:18.888143 2150 log.go:172] (0xc0009c9810) (0xc000aa0460) Stream removed, broadcasting: 1\nI0505 00:26:18.888165 2150 log.go:172] (0xc0009c9810) Go away received\nI0505 00:26:18.888582 2150 log.go:172] (0xc0009c9810) (0xc000aa0460) Stream removed, broadcasting: 1\nI0505 00:26:18.888609 2150 log.go:172] (0xc0009c9810) (0xc00070a640) Stream removed, broadcasting: 3\nI0505 00:26:18.888626 2150 log.go:172] (0xc0009c9810) (0xc0006865a0) Stream removed, broadcasting: 5\n" May 5 00:26:18.893: INFO: stdout: "" May 5 00:26:18.894: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3822 execpod-affinity8npbp -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31924' May 5 00:26:19.130: INFO: stderr: "I0505 00:26:19.035093 2172 log.go:172] (0xc00050cfd0) (0xc000b7c6e0) Create stream\nI0505 
00:26:19.035158 2172 log.go:172] (0xc00050cfd0) (0xc000b7c6e0) Stream added, broadcasting: 1\nI0505 00:26:19.044670 2172 log.go:172] (0xc00050cfd0) Reply frame received for 1\nI0505 00:26:19.044730 2172 log.go:172] (0xc00050cfd0) (0xc000532f00) Create stream\nI0505 00:26:19.044742 2172 log.go:172] (0xc00050cfd0) (0xc000532f00) Stream added, broadcasting: 3\nI0505 00:26:19.046902 2172 log.go:172] (0xc00050cfd0) Reply frame received for 3\nI0505 00:26:19.046946 2172 log.go:172] (0xc00050cfd0) (0xc0000f3b80) Create stream\nI0505 00:26:19.046959 2172 log.go:172] (0xc00050cfd0) (0xc0000f3b80) Stream added, broadcasting: 5\nI0505 00:26:19.047673 2172 log.go:172] (0xc00050cfd0) Reply frame received for 5\nI0505 00:26:19.122338 2172 log.go:172] (0xc00050cfd0) Data frame received for 3\nI0505 00:26:19.122396 2172 log.go:172] (0xc000532f00) (3) Data frame handling\nI0505 00:26:19.122434 2172 log.go:172] (0xc00050cfd0) Data frame received for 5\nI0505 00:26:19.122458 2172 log.go:172] (0xc0000f3b80) (5) Data frame handling\nI0505 00:26:19.122492 2172 log.go:172] (0xc0000f3b80) (5) Data frame sent\nI0505 00:26:19.122510 2172 log.go:172] (0xc00050cfd0) Data frame received for 5\nI0505 00:26:19.122521 2172 log.go:172] (0xc0000f3b80) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31924\nConnection to 172.17.0.12 31924 port [tcp/31924] succeeded!\nI0505 00:26:19.124089 2172 log.go:172] (0xc00050cfd0) Data frame received for 1\nI0505 00:26:19.124112 2172 log.go:172] (0xc000b7c6e0) (1) Data frame handling\nI0505 00:26:19.124137 2172 log.go:172] (0xc000b7c6e0) (1) Data frame sent\nI0505 00:26:19.124156 2172 log.go:172] (0xc00050cfd0) (0xc000b7c6e0) Stream removed, broadcasting: 1\nI0505 00:26:19.124192 2172 log.go:172] (0xc00050cfd0) Go away received\nI0505 00:26:19.124581 2172 log.go:172] (0xc00050cfd0) (0xc000b7c6e0) Stream removed, broadcasting: 1\nI0505 00:26:19.124599 2172 log.go:172] (0xc00050cfd0) (0xc000532f00) Stream removed, broadcasting: 3\nI0505 00:26:19.124609 2172 
log.go:172] (0xc00050cfd0) (0xc0000f3b80) Stream removed, broadcasting: 5\n" May 5 00:26:19.130: INFO: stdout: "" May 5 00:26:19.139: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3822 execpod-affinity8npbp -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31924/ ; done' May 5 00:26:19.439: INFO: stderr: "I0505 00:26:19.273494 2192 log.go:172] (0xc0009a71e0) (0xc0003b3180) Create stream\nI0505 00:26:19.273547 2192 log.go:172] (0xc0009a71e0) (0xc0003b3180) Stream added, broadcasting: 1\nI0505 00:26:19.276084 2192 log.go:172] (0xc0009a71e0) Reply frame received for 1\nI0505 00:26:19.276126 2192 log.go:172] (0xc0009a71e0) (0xc000b92000) Create stream\nI0505 00:26:19.276137 2192 log.go:172] (0xc0009a71e0) (0xc000b92000) Stream added, broadcasting: 3\nI0505 00:26:19.277061 2192 log.go:172] (0xc0009a71e0) Reply frame received for 3\nI0505 00:26:19.277089 2192 log.go:172] (0xc0009a71e0) (0xc00090b720) Create stream\nI0505 00:26:19.277099 2192 log.go:172] (0xc0009a71e0) (0xc00090b720) Stream added, broadcasting: 5\nI0505 00:26:19.278162 2192 log.go:172] (0xc0009a71e0) Reply frame received for 5\nI0505 00:26:19.336784 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.336842 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.336868 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.336896 2192 log.go:172] (0xc0009a71e0) Data frame received for 5\nI0505 00:26:19.336909 2192 log.go:172] (0xc00090b720) (5) Data frame handling\nI0505 00:26:19.336919 2192 log.go:172] (0xc00090b720) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.341533 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.341552 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.341563 2192 log.go:172] 
(0xc000b92000) (3) Data frame sent\nI0505 00:26:19.341879 2192 log.go:172] (0xc0009a71e0) Data frame received for 5\nI0505 00:26:19.341932 2192 log.go:172] (0xc00090b720) (5) Data frame handling\nI0505 00:26:19.341971 2192 log.go:172] (0xc00090b720) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.341994 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.342013 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.342036 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.345940 2192 log.go:172] (0xc0009a71e0) Data frame received for 5\nI0505 00:26:19.345962 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.345987 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.346003 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.346012 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.346020 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.346036 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.346049 2192 log.go:172] (0xc00090b720) (5) Data frame handling\nI0505 00:26:19.346079 2192 log.go:172] (0xc00090b720) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.349896 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.349920 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.349942 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.350254 2192 log.go:172] (0xc0009a71e0) Data frame received for 5\nI0505 00:26:19.350278 2192 log.go:172] (0xc00090b720) (5) Data frame handling\nI0505 00:26:19.350292 2192 log.go:172] (0xc00090b720) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.350320 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.350341 2192 
log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.350359 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.356534 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.356560 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.356579 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.357312 2192 log.go:172] (0xc0009a71e0) Data frame received for 5\nI0505 00:26:19.357337 2192 log.go:172] (0xc00090b720) (5) Data frame handling\nI0505 00:26:19.357349 2192 log.go:172] (0xc00090b720) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.357369 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.357406 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.357446 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.365549 2192 log.go:172] (0xc0009a71e0) Data frame received for 5\nI0505 00:26:19.365602 2192 log.go:172] (0xc00090b720) (5) Data frame handling\nI0505 00:26:19.365619 2192 log.go:172] (0xc00090b720) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.365657 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.365676 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.365703 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.365713 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.365721 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.365750 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.367770 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.367799 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.367840 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.368336 2192 log.go:172] (0xc0009a71e0) Data frame received for 5\nI0505 
00:26:19.368356 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.368381 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.368397 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.368420 2192 log.go:172] (0xc00090b720) (5) Data frame handling\nI0505 00:26:19.368430 2192 log.go:172] (0xc00090b720) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.376085 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.376100 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.376113 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.376636 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.376660 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.376674 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.376696 2192 log.go:172] (0xc0009a71e0) Data frame received for 5\nI0505 00:26:19.376709 2192 log.go:172] (0xc00090b720) (5) Data frame handling\nI0505 00:26:19.376726 2192 log.go:172] (0xc00090b720) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.381638 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.381652 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.381667 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.382211 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.382239 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.382254 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.382277 2192 log.go:172] (0xc0009a71e0) Data frame received for 5\nI0505 00:26:19.382286 2192 log.go:172] (0xc00090b720) (5) Data frame handling\nI0505 00:26:19.382294 2192 log.go:172] (0xc00090b720) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.17.0.13:31924/\nI0505 00:26:19.388766 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.388784 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.388808 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.389353 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.389388 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.389404 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.389421 2192 log.go:172] (0xc0009a71e0) Data frame received for 5\nI0505 00:26:19.389436 2192 log.go:172] (0xc00090b720) (5) Data frame handling\nI0505 00:26:19.389454 2192 log.go:172] (0xc00090b720) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.394278 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.394294 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.394317 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.394561 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.394575 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.394586 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.394596 2192 log.go:172] (0xc0009a71e0) Data frame received for 5\nI0505 00:26:19.394608 2192 log.go:172] (0xc00090b720) (5) Data frame handling\nI0505 00:26:19.394615 2192 log.go:172] (0xc00090b720) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.398233 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.398247 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.398255 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.398619 2192 log.go:172] (0xc0009a71e0) Data frame received for 5\nI0505 00:26:19.398633 2192 log.go:172] (0xc00090b720) (5) Data frame handling\nI0505 00:26:19.398644 2192 
log.go:172] (0xc00090b720) (5) Data frame sent\nI0505 00:26:19.398651 2192 log.go:172] (0xc0009a71e0) Data frame received for 5\nI0505 00:26:19.398661 2192 log.go:172] (0xc00090b720) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.398703 2192 log.go:172] (0xc00090b720) (5) Data frame sent\nI0505 00:26:19.398832 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.398850 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.398870 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.402950 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.402967 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.402978 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.403284 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.403300 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.403309 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.403346 2192 log.go:172] (0xc0009a71e0) Data frame received for 5\nI0505 00:26:19.403381 2192 log.go:172] (0xc00090b720) (5) Data frame handling\nI0505 00:26:19.403428 2192 log.go:172] (0xc00090b720) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.408144 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.408155 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.408162 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.408972 2192 log.go:172] (0xc0009a71e0) Data frame received for 5\nI0505 00:26:19.408997 2192 log.go:172] (0xc00090b720) (5) Data frame handling\nI0505 00:26:19.409009 2192 log.go:172] (0xc00090b720) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.409021 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.409031 
2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.409048 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.413930 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.413967 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.413989 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.414858 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.414882 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.414897 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.417583 2192 log.go:172] (0xc0009a71e0) Data frame received for 5\nI0505 00:26:19.417620 2192 log.go:172] (0xc00090b720) (5) Data frame handling\nI0505 00:26:19.417642 2192 log.go:172] (0xc00090b720) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.425988 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.426011 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.426023 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.426142 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.426159 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.426170 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.426187 2192 log.go:172] (0xc0009a71e0) Data frame received for 5\nI0505 00:26:19.426194 2192 log.go:172] (0xc00090b720) (5) Data frame handling\nI0505 00:26:19.426202 2192 log.go:172] (0xc00090b720) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.433385 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 00:26:19.433397 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.433407 2192 log.go:172] (0xc000b92000) (3) Data frame sent\nI0505 00:26:19.433976 2192 log.go:172] (0xc0009a71e0) Data frame received for 3\nI0505 
00:26:19.433993 2192 log.go:172] (0xc000b92000) (3) Data frame handling\nI0505 00:26:19.434099 2192 log.go:172] (0xc0009a71e0) Data frame received for 5\nI0505 00:26:19.434119 2192 log.go:172] (0xc00090b720) (5) Data frame handling\nI0505 00:26:19.435434 2192 log.go:172] (0xc0009a71e0) Data frame received for 1\nI0505 00:26:19.435447 2192 log.go:172] (0xc0003b3180) (1) Data frame handling\nI0505 00:26:19.435463 2192 log.go:172] (0xc0003b3180) (1) Data frame sent\nI0505 00:26:19.435476 2192 log.go:172] (0xc0009a71e0) (0xc0003b3180) Stream removed, broadcasting: 1\nI0505 00:26:19.435486 2192 log.go:172] (0xc0009a71e0) Go away received\nI0505 00:26:19.435815 2192 log.go:172] (0xc0009a71e0) (0xc0003b3180) Stream removed, broadcasting: 1\nI0505 00:26:19.435829 2192 log.go:172] (0xc0009a71e0) (0xc000b92000) Stream removed, broadcasting: 3\nI0505 00:26:19.435835 2192 log.go:172] (0xc0009a71e0) (0xc00090b720) Stream removed, broadcasting: 5\n" May 5 00:26:19.440: INFO: stdout: "\naffinity-nodeport-transition-qgpf2\naffinity-nodeport-transition-697z5\naffinity-nodeport-transition-697z5\naffinity-nodeport-transition-697z5\naffinity-nodeport-transition-srt6k\naffinity-nodeport-transition-qgpf2\naffinity-nodeport-transition-qgpf2\naffinity-nodeport-transition-697z5\naffinity-nodeport-transition-697z5\naffinity-nodeport-transition-qgpf2\naffinity-nodeport-transition-697z5\naffinity-nodeport-transition-697z5\naffinity-nodeport-transition-qgpf2\naffinity-nodeport-transition-697z5\naffinity-nodeport-transition-srt6k\naffinity-nodeport-transition-qgpf2" May 5 00:26:19.440: INFO: Received response from host: May 5 00:26:19.440: INFO: Received response from host: affinity-nodeport-transition-qgpf2 May 5 00:26:19.440: INFO: Received response from host: affinity-nodeport-transition-697z5 May 5 00:26:19.440: INFO: Received response from host: affinity-nodeport-transition-697z5 May 5 00:26:19.440: INFO: Received response from host: affinity-nodeport-transition-697z5 May 5 00:26:19.440: 
INFO: Received response from host: affinity-nodeport-transition-srt6k May 5 00:26:19.440: INFO: Received response from host: affinity-nodeport-transition-qgpf2 May 5 00:26:19.440: INFO: Received response from host: affinity-nodeport-transition-qgpf2 May 5 00:26:19.440: INFO: Received response from host: affinity-nodeport-transition-697z5 May 5 00:26:19.440: INFO: Received response from host: affinity-nodeport-transition-697z5 May 5 00:26:19.440: INFO: Received response from host: affinity-nodeport-transition-qgpf2 May 5 00:26:19.440: INFO: Received response from host: affinity-nodeport-transition-697z5 May 5 00:26:19.440: INFO: Received response from host: affinity-nodeport-transition-697z5 May 5 00:26:19.440: INFO: Received response from host: affinity-nodeport-transition-qgpf2 May 5 00:26:19.440: INFO: Received response from host: affinity-nodeport-transition-697z5 May 5 00:26:19.440: INFO: Received response from host: affinity-nodeport-transition-srt6k May 5 00:26:19.440: INFO: Received response from host: affinity-nodeport-transition-qgpf2 May 5 00:26:19.447: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3822 execpod-affinity8npbp -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31924/ ; done' May 5 00:26:19.749: INFO: stderr: "I0505 00:26:19.571939 2209 log.go:172] (0xc0000e0d10) (0xc000a963c0) Create stream\nI0505 00:26:19.571983 2209 log.go:172] (0xc0000e0d10) (0xc000a963c0) Stream added, broadcasting: 1\nI0505 00:26:19.573389 2209 log.go:172] (0xc0000e0d10) Reply frame received for 1\nI0505 00:26:19.573425 2209 log.go:172] (0xc0000e0d10) (0xc000a96460) Create stream\nI0505 00:26:19.573438 2209 log.go:172] (0xc0000e0d10) (0xc000a96460) Stream added, broadcasting: 3\nI0505 00:26:19.574350 2209 log.go:172] (0xc0000e0d10) Reply frame received for 3\nI0505 00:26:19.574378 2209 log.go:172] (0xc0000e0d10) (0xc0006c05a0) Create 
stream\nI0505 00:26:19.574391 2209 log.go:172] (0xc0000e0d10) (0xc0006c05a0) Stream added, broadcasting: 5\nI0505 00:26:19.575069 2209 log.go:172] (0xc0000e0d10) Reply frame received for 5\nI0505 00:26:19.636809 2209 log.go:172] (0xc0000e0d10) Data frame received for 5\nI0505 00:26:19.636860 2209 log.go:172] (0xc0006c05a0) (5) Data frame handling\nI0505 00:26:19.636879 2209 log.go:172] (0xc0006c05a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.636903 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.636922 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.636949 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.643445 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.643478 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.643506 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.643823 2209 log.go:172] (0xc0000e0d10) Data frame received for 5\nI0505 00:26:19.643839 2209 log.go:172] (0xc0006c05a0) (5) Data frame handling\nI0505 00:26:19.643850 2209 log.go:172] (0xc0006c05a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.643959 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.643989 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.644007 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.650990 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.651032 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.651053 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.651422 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.651452 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.651470 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 
00:26:19.651493 2209 log.go:172] (0xc0000e0d10) Data frame received for 5\nI0505 00:26:19.651510 2209 log.go:172] (0xc0006c05a0) (5) Data frame handling\nI0505 00:26:19.651533 2209 log.go:172] (0xc0006c05a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.658199 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.658214 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.658228 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.658648 2209 log.go:172] (0xc0000e0d10) Data frame received for 5\nI0505 00:26:19.658663 2209 log.go:172] (0xc0006c05a0) (5) Data frame handling\nI0505 00:26:19.658679 2209 log.go:172] (0xc0006c05a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.658770 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.658793 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.658817 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.665510 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.665527 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.665540 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.665832 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.665851 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.665872 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.665884 2209 log.go:172] (0xc0000e0d10) Data frame received for 5\nI0505 00:26:19.665892 2209 log.go:172] (0xc0006c05a0) (5) Data frame handling\nI0505 00:26:19.665911 2209 log.go:172] (0xc0006c05a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.671752 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.671766 2209 log.go:172] (0xc000a96460) (3) Data frame 
handling\nI0505 00:26:19.671780 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.672670 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.672689 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.672707 2209 log.go:172] (0xc0000e0d10) Data frame received for 5\nI0505 00:26:19.672734 2209 log.go:172] (0xc0006c05a0) (5) Data frame handling\nI0505 00:26:19.672745 2209 log.go:172] (0xc0006c05a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.672760 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.677780 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.677810 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.677829 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.678267 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.678281 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.678289 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.678311 2209 log.go:172] (0xc0000e0d10) Data frame received for 5\nI0505 00:26:19.678328 2209 log.go:172] (0xc0006c05a0) (5) Data frame handling\nI0505 00:26:19.678351 2209 log.go:172] (0xc0006c05a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.683410 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.683426 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.683440 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.683764 2209 log.go:172] (0xc0000e0d10) Data frame received for 5\nI0505 00:26:19.683794 2209 log.go:172] (0xc0006c05a0) (5) Data frame handling\nI0505 00:26:19.683819 2209 log.go:172] (0xc0006c05a0) (5) Data frame sent\nI0505 00:26:19.683838 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.683849 2209 log.go:172] 
(0xc000a96460) (3) Data frame handling\nI0505 00:26:19.683858 2209 log.go:172] (0xc000a96460) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.689767 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.689784 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.689799 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.690485 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.690509 2209 log.go:172] (0xc0000e0d10) Data frame received for 5\nI0505 00:26:19.690532 2209 log.go:172] (0xc0006c05a0) (5) Data frame handling\nI0505 00:26:19.690541 2209 log.go:172] (0xc0006c05a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.690554 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.690562 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.697520 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.697551 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.697579 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.697929 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.697968 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.697991 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.698020 2209 log.go:172] (0xc0000e0d10) Data frame received for 5\nI0505 00:26:19.698045 2209 log.go:172] (0xc0006c05a0) (5) Data frame handling\nI0505 00:26:19.698071 2209 log.go:172] (0xc0006c05a0) (5) Data frame sent\nI0505 00:26:19.698090 2209 log.go:172] (0xc0000e0d10) Data frame received for 5\nI0505 00:26:19.698105 2209 log.go:172] (0xc0006c05a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.698138 2209 log.go:172] (0xc0006c05a0) (5) Data frame sent\nI0505 00:26:19.702800 2209 
log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.702823 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.702840 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.703655 2209 log.go:172] (0xc0000e0d10) Data frame received for 5\nI0505 00:26:19.703674 2209 log.go:172] (0xc0006c05a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.703700 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.703741 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.703757 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.703783 2209 log.go:172] (0xc0006c05a0) (5) Data frame sent\nI0505 00:26:19.709731 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.709763 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.709792 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.710291 2209 log.go:172] (0xc0000e0d10) Data frame received for 5\nI0505 00:26:19.710308 2209 log.go:172] (0xc0006c05a0) (5) Data frame handling\nI0505 00:26:19.710319 2209 log.go:172] (0xc0006c05a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.710339 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.710368 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.710394 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.716510 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.716537 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.716559 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.717008 2209 log.go:172] (0xc0000e0d10) Data frame received for 5\nI0505 00:26:19.717075 2209 log.go:172] (0xc0006c05a0) (5) Data frame handling\n+ echo\nI0505 00:26:19.717098 2209 log.go:172] (0xc0000e0d10) Data frame received for 
3\nI0505 00:26:19.717317 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.717355 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.717375 2209 log.go:172] (0xc0006c05a0) (5) Data frame sent\nI0505 00:26:19.717396 2209 log.go:172] (0xc0000e0d10) Data frame received for 5\nI0505 00:26:19.717416 2209 log.go:172] (0xc0006c05a0) (5) Data frame handling\nI0505 00:26:19.717443 2209 log.go:172] (0xc0006c05a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.722181 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.722214 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.722254 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.722779 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.722814 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.722828 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.722848 2209 log.go:172] (0xc0000e0d10) Data frame received for 5\nI0505 00:26:19.722857 2209 log.go:172] (0xc0006c05a0) (5) Data frame handling\nI0505 00:26:19.722869 2209 log.go:172] (0xc0006c05a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.727446 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.727556 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.727603 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.727887 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.727910 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.727931 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.727962 2209 log.go:172] (0xc0000e0d10) Data frame received for 5\nI0505 00:26:19.727993 2209 log.go:172] (0xc0006c05a0) (5) Data frame handling\nI0505 00:26:19.728015 2209 log.go:172] (0xc0006c05a0) (5) Data frame 
sent\nI0505 00:26:19.728031 2209 log.go:172] (0xc0000e0d10) Data frame received for 5\nI0505 00:26:19.728045 2209 log.go:172] (0xc0006c05a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.728074 2209 log.go:172] (0xc0006c05a0) (5) Data frame sent\nI0505 00:26:19.734233 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.734261 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.734281 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.735287 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.735320 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.735341 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.735367 2209 log.go:172] (0xc0000e0d10) Data frame received for 5\nI0505 00:26:19.735383 2209 log.go:172] (0xc0006c05a0) (5) Data frame handling\nI0505 00:26:19.735403 2209 log.go:172] (0xc0006c05a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31924/\nI0505 00:26:19.741807 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.741862 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.741883 2209 log.go:172] (0xc000a96460) (3) Data frame sent\nI0505 00:26:19.742397 2209 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0505 00:26:19.742423 2209 log.go:172] (0xc000a96460) (3) Data frame handling\nI0505 00:26:19.742464 2209 log.go:172] (0xc0000e0d10) Data frame received for 5\nI0505 00:26:19.742481 2209 log.go:172] (0xc0006c05a0) (5) Data frame handling\nI0505 00:26:19.744527 2209 log.go:172] (0xc0000e0d10) Data frame received for 1\nI0505 00:26:19.744549 2209 log.go:172] (0xc000a963c0) (1) Data frame handling\nI0505 00:26:19.744577 2209 log.go:172] (0xc000a963c0) (1) Data frame sent\nI0505 00:26:19.744605 2209 log.go:172] (0xc0000e0d10) (0xc000a963c0) Stream removed, broadcasting: 1\nI0505 00:26:19.744819 
2209 log.go:172] (0xc0000e0d10) Go away received\nI0505 00:26:19.745022 2209 log.go:172] (0xc0000e0d10) (0xc000a963c0) Stream removed, broadcasting: 1\nI0505 00:26:19.745043 2209 log.go:172] (0xc0000e0d10) (0xc000a96460) Stream removed, broadcasting: 3\nI0505 00:26:19.745056 2209 log.go:172] (0xc0000e0d10) (0xc0006c05a0) Stream removed, broadcasting: 5\n"
May 5 00:26:19.750: INFO: stdout: "\naffinity-nodeport-transition-qgpf2\naffinity-nodeport-transition-qgpf2\naffinity-nodeport-transition-qgpf2\naffinity-nodeport-transition-qgpf2\naffinity-nodeport-transition-qgpf2\naffinity-nodeport-transition-qgpf2\naffinity-nodeport-transition-qgpf2\naffinity-nodeport-transition-qgpf2\naffinity-nodeport-transition-qgpf2\naffinity-nodeport-transition-qgpf2\naffinity-nodeport-transition-qgpf2\naffinity-nodeport-transition-qgpf2\naffinity-nodeport-transition-qgpf2\naffinity-nodeport-transition-qgpf2\naffinity-nodeport-transition-qgpf2\naffinity-nodeport-transition-qgpf2"
May 5 00:26:19.750: INFO: Received response from host: 
May 5 00:26:19.750: INFO: Received response from host: affinity-nodeport-transition-qgpf2
May 5 00:26:19.750: INFO: Received response from host: affinity-nodeport-transition-qgpf2
May 5 00:26:19.750: INFO: Received response from host: affinity-nodeport-transition-qgpf2
May 5 00:26:19.750: INFO: Received response from host: affinity-nodeport-transition-qgpf2
May 5 00:26:19.750: INFO: Received response from host: affinity-nodeport-transition-qgpf2
May 5 00:26:19.750: INFO: Received response from host: affinity-nodeport-transition-qgpf2
May 5 00:26:19.750: INFO: Received response from host: affinity-nodeport-transition-qgpf2
May 5 00:26:19.750: INFO: Received response from host: affinity-nodeport-transition-qgpf2
May 5 00:26:19.750: INFO: Received response from host: affinity-nodeport-transition-qgpf2
May 5 00:26:19.750: INFO: Received response from host: affinity-nodeport-transition-qgpf2
May 5 00:26:19.750: INFO: Received response from host: affinity-nodeport-transition-qgpf2
May 5 00:26:19.750: INFO: Received response from host: affinity-nodeport-transition-qgpf2
May 5 00:26:19.750: INFO: Received response from host: affinity-nodeport-transition-qgpf2
May 5 00:26:19.750: INFO: Received response from host: affinity-nodeport-transition-qgpf2
May 5 00:26:19.750: INFO: Received response from host: affinity-nodeport-transition-qgpf2
May 5 00:26:19.750: INFO: Received response from host: affinity-nodeport-transition-qgpf2
May 5 00:26:19.751: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-3822, will wait for the garbage collector to delete the pods
May 5 00:26:19.875: INFO: Deleting ReplicationController affinity-nodeport-transition took: 30.141672ms
May 5 00:26:20.476: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 600.285219ms
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 5 00:26:35.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3822" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:28.318 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":146,"skipped":2452,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 5 00:26:35.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating secret secrets-8219/secret-test-e6472eaf-cd9b-4d21-9062-95ff50d813e3
STEP: Creating a pod to test consume secrets
May 5 00:26:35.426: INFO: Waiting up to 5m0s for pod "pod-configmaps-fa2d14ba-258a-46bb-ad63-76899888272a" in namespace "secrets-8219" to be "Succeeded or Failed"
May 5 00:26:35.488: INFO: Pod "pod-configmaps-fa2d14ba-258a-46bb-ad63-76899888272a": Phase="Pending", Reason="", readiness=false. Elapsed: 62.287675ms
May 5 00:26:37.506: INFO: Pod "pod-configmaps-fa2d14ba-258a-46bb-ad63-76899888272a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080420341s
May 5 00:26:39.510: INFO: Pod "pod-configmaps-fa2d14ba-258a-46bb-ad63-76899888272a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084569141s
STEP: Saw pod success
May 5 00:26:39.510: INFO: Pod "pod-configmaps-fa2d14ba-258a-46bb-ad63-76899888272a" satisfied condition "Succeeded or Failed"
May 5 00:26:39.514: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-fa2d14ba-258a-46bb-ad63-76899888272a container env-test: 
STEP: delete the pod
May 5 00:26:39.620: INFO: Waiting for pod pod-configmaps-fa2d14ba-258a-46bb-ad63-76899888272a to disappear
May 5 00:26:39.627: INFO: Pod pod-configmaps-fa2d14ba-258a-46bb-ad63-76899888272a no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 5 00:26:39.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8219" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":147,"skipped":2456,"failed":0}
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 5 00:26:39.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
May 5 00:26:39.769: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5785 /api/v1/namespaces/watch-5785/configmaps/e2e-watch-test-resource-version c6665074-15e2-4cdc-bf7a-3c269aa07807 1526866 0 2020-05-05 00:26:39 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-05 00:26:39 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 5 00:26:39.770: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5785 /api/v1/namespaces/watch-5785/configmaps/e2e-watch-test-resource-version c6665074-15e2-4cdc-bf7a-3c269aa07807 1526867 0 2020-05-05 00:26:39 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-05 00:26:39 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 5 00:26:39.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5785" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":288,"completed":148,"skipped":2456,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 5 00:26:39.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 5 00:26:39.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-8283
I0505 00:26:39.842321 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8283, replica count: 1
I0505 00:26:40.892724 7 runners.go:190] 
svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0505 00:26:41.892946 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0505 00:26:42.893305 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0505 00:26:43.893554 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 5 00:26:44.026: INFO: Created: latency-svc-fb27t May 5 00:26:44.075: INFO: Got endpoints: latency-svc-fb27t [81.431828ms] May 5 00:26:44.164: INFO: Created: latency-svc-597hz May 5 00:26:44.180: INFO: Got endpoints: latency-svc-597hz [105.435349ms] May 5 00:26:44.200: INFO: Created: latency-svc-wv8gx May 5 00:26:44.216: INFO: Got endpoints: latency-svc-wv8gx [140.774018ms] May 5 00:26:44.290: INFO: Created: latency-svc-5954j May 5 00:26:44.295: INFO: Got endpoints: latency-svc-5954j [219.782592ms] May 5 00:26:44.325: INFO: Created: latency-svc-6p9tq May 5 00:26:44.355: INFO: Got endpoints: latency-svc-6p9tq [279.968939ms] May 5 00:26:44.385: INFO: Created: latency-svc-ptlbm May 5 00:26:44.458: INFO: Got endpoints: latency-svc-ptlbm [383.119582ms] May 5 00:26:44.488: INFO: Created: latency-svc-r82fx May 5 00:26:44.505: INFO: Got endpoints: latency-svc-r82fx [430.145404ms] May 5 00:26:44.524: INFO: Created: latency-svc-jc8lf May 5 00:26:44.535: INFO: Got endpoints: latency-svc-jc8lf [459.650105ms] May 5 00:26:44.602: INFO: Created: latency-svc-9tzb7 May 5 00:26:44.606: INFO: Got endpoints: latency-svc-9tzb7 [530.57475ms] May 5 00:26:44.643: INFO: Created: latency-svc-4vknf May 5 00:26:44.655: INFO: Got endpoints: latency-svc-4vknf [580.047523ms] May 5 00:26:44.679: INFO: Created: latency-svc-2d9zz May 5 00:26:44.692: INFO: 
Got endpoints: latency-svc-2d9zz [616.783533ms] May 5 00:26:44.757: INFO: Created: latency-svc-bj4nv May 5 00:26:44.764: INFO: Got endpoints: latency-svc-bj4nv [688.990055ms] May 5 00:26:44.794: INFO: Created: latency-svc-lmqdm May 5 00:26:44.806: INFO: Got endpoints: latency-svc-lmqdm [730.898872ms] May 5 00:26:44.900: INFO: Created: latency-svc-dgjmn May 5 00:26:44.931: INFO: Created: latency-svc-dxnq8 May 5 00:26:44.932: INFO: Got endpoints: latency-svc-dgjmn [856.522108ms] May 5 00:26:44.945: INFO: Got endpoints: latency-svc-dxnq8 [869.567284ms] May 5 00:26:44.997: INFO: Created: latency-svc-xvzsr May 5 00:26:45.044: INFO: Got endpoints: latency-svc-xvzsr [968.906472ms] May 5 00:26:45.082: INFO: Created: latency-svc-4625c May 5 00:26:45.095: INFO: Got endpoints: latency-svc-4625c [914.704642ms] May 5 00:26:45.201: INFO: Created: latency-svc-rkv5l May 5 00:26:45.206: INFO: Got endpoints: latency-svc-rkv5l [990.376759ms] May 5 00:26:45.243: INFO: Created: latency-svc-d628v May 5 00:26:45.259: INFO: Got endpoints: latency-svc-d628v [964.196352ms] May 5 00:26:45.344: INFO: Created: latency-svc-l2v58 May 5 00:26:45.352: INFO: Got endpoints: latency-svc-l2v58 [996.786158ms] May 5 00:26:45.405: INFO: Created: latency-svc-t6mjf May 5 00:26:45.441: INFO: Got endpoints: latency-svc-t6mjf [982.895923ms] May 5 00:26:45.442: INFO: Created: latency-svc-z6lk8 May 5 00:26:45.511: INFO: Got endpoints: latency-svc-z6lk8 [1.005324026s] May 5 00:26:45.543: INFO: Created: latency-svc-wt5lw May 5 00:26:45.558: INFO: Got endpoints: latency-svc-wt5lw [1.023686774s] May 5 00:26:45.662: INFO: Created: latency-svc-thrln May 5 00:26:45.673: INFO: Got endpoints: latency-svc-thrln [1.067201367s] May 5 00:26:45.700: INFO: Created: latency-svc-trm8n May 5 00:26:45.715: INFO: Got endpoints: latency-svc-trm8n [1.060123399s] May 5 00:26:45.740: INFO: Created: latency-svc-rp7zl May 5 00:26:45.757: INFO: Got endpoints: latency-svc-rp7zl [1.064947972s] May 5 00:26:45.825: INFO: Created: 
latency-svc-ftlqd May 5 00:26:45.835: INFO: Got endpoints: latency-svc-ftlqd [1.070902277s] May 5 00:26:45.879: INFO: Created: latency-svc-ctftg May 5 00:26:45.902: INFO: Got endpoints: latency-svc-ctftg [1.096077793s] May 5 00:26:45.952: INFO: Created: latency-svc-b5lnr May 5 00:26:45.968: INFO: Got endpoints: latency-svc-b5lnr [1.036289209s] May 5 00:26:46.016: INFO: Created: latency-svc-z7tdx May 5 00:26:46.105: INFO: Got endpoints: latency-svc-z7tdx [1.159940064s] May 5 00:26:46.155: INFO: Created: latency-svc-nzl64 May 5 00:26:46.166: INFO: Got endpoints: latency-svc-nzl64 [1.121911722s] May 5 00:26:46.274: INFO: Created: latency-svc-g7nfp May 5 00:26:46.305: INFO: Got endpoints: latency-svc-g7nfp [1.209880619s] May 5 00:26:46.347: INFO: Created: latency-svc-576hq May 5 00:26:46.358: INFO: Got endpoints: latency-svc-576hq [1.151931411s] May 5 00:26:46.442: INFO: Created: latency-svc-sh26w May 5 00:26:46.455: INFO: Got endpoints: latency-svc-sh26w [1.195742572s] May 5 00:26:46.565: INFO: Created: latency-svc-wkdjf May 5 00:26:46.593: INFO: Got endpoints: latency-svc-wkdjf [1.241342762s] May 5 00:26:46.638: INFO: Created: latency-svc-9r6qr May 5 00:26:46.715: INFO: Got endpoints: latency-svc-9r6qr [1.274253549s] May 5 00:26:46.767: INFO: Created: latency-svc-kmgkb May 5 00:26:46.792: INFO: Got endpoints: latency-svc-kmgkb [1.281331443s] May 5 00:26:46.864: INFO: Created: latency-svc-9gmzx May 5 00:26:46.876: INFO: Got endpoints: latency-svc-9gmzx [1.31784099s] May 5 00:26:46.910: INFO: Created: latency-svc-fczjp May 5 00:26:46.943: INFO: Got endpoints: latency-svc-fczjp [1.269836078s] May 5 00:26:47.051: INFO: Created: latency-svc-vqwfp May 5 00:26:47.056: INFO: Got endpoints: latency-svc-vqwfp [1.340681031s] May 5 00:26:47.108: INFO: Created: latency-svc-5kkkq May 5 00:26:47.176: INFO: Got endpoints: latency-svc-5kkkq [1.419387519s] May 5 00:26:47.211: INFO: Created: latency-svc-wgwsw May 5 00:26:47.225: INFO: Got endpoints: latency-svc-wgwsw [1.389911952s] May 
5 00:26:47.338: INFO: Created: latency-svc-wbbh7 May 5 00:26:47.426: INFO: Got endpoints: latency-svc-wbbh7 [1.524087464s] May 5 00:26:47.486: INFO: Created: latency-svc-6s47w May 5 00:26:47.501: INFO: Got endpoints: latency-svc-6s47w [1.533525193s] May 5 00:26:47.553: INFO: Created: latency-svc-4wj7n May 5 00:26:47.568: INFO: Got endpoints: latency-svc-4wj7n [1.463394014s] May 5 00:26:47.659: INFO: Created: latency-svc-rhhls May 5 00:26:47.676: INFO: Got endpoints: latency-svc-rhhls [1.509830187s] May 5 00:26:47.715: INFO: Created: latency-svc-mmbn6 May 5 00:26:47.799: INFO: Got endpoints: latency-svc-mmbn6 [1.493405133s] May 5 00:26:47.816: INFO: Created: latency-svc-pjcc8 May 5 00:26:47.827: INFO: Got endpoints: latency-svc-pjcc8 [1.468188301s] May 5 00:26:47.875: INFO: Created: latency-svc-4mjn4 May 5 00:26:47.936: INFO: Got endpoints: latency-svc-4mjn4 [1.481150762s] May 5 00:26:48.003: INFO: Created: latency-svc-6rnc9 May 5 00:26:48.092: INFO: Got endpoints: latency-svc-6rnc9 [1.498767086s] May 5 00:26:48.127: INFO: Created: latency-svc-4snjg May 5 00:26:48.146: INFO: Got endpoints: latency-svc-4snjg [1.430995279s] May 5 00:26:48.171: INFO: Created: latency-svc-dv4sr May 5 00:26:48.189: INFO: Got endpoints: latency-svc-dv4sr [1.396615295s] May 5 00:26:48.262: INFO: Created: latency-svc-sz74l May 5 00:26:48.279: INFO: Got endpoints: latency-svc-sz74l [1.402275451s] May 5 00:26:48.319: INFO: Created: latency-svc-dvs87 May 5 00:26:48.333: INFO: Got endpoints: latency-svc-dvs87 [1.390276588s] May 5 00:26:48.355: INFO: Created: latency-svc-tlttj May 5 00:26:48.445: INFO: Got endpoints: latency-svc-tlttj [1.389226783s] May 5 00:26:48.448: INFO: Created: latency-svc-tzxb9 May 5 00:26:48.482: INFO: Got endpoints: latency-svc-tzxb9 [1.306175824s] May 5 00:26:48.595: INFO: Created: latency-svc-p799h May 5 00:26:48.601: INFO: Got endpoints: latency-svc-p799h [1.376336868s] May 5 00:26:48.631: INFO: Created: latency-svc-xkkpj May 5 00:26:48.650: INFO: Got endpoints: 
latency-svc-xkkpj [1.223520603s] May 5 00:26:48.675: INFO: Created: latency-svc-ql6gc May 5 00:26:48.692: INFO: Got endpoints: latency-svc-ql6gc [1.19092218s] May 5 00:26:48.746: INFO: Created: latency-svc-hkh2m May 5 00:26:48.788: INFO: Got endpoints: latency-svc-hkh2m [1.220189729s] May 5 00:26:48.823: INFO: Created: latency-svc-555fl May 5 00:26:48.837: INFO: Got endpoints: latency-svc-555fl [1.161300485s] May 5 00:26:48.898: INFO: Created: latency-svc-j9jsq May 5 00:26:48.899: INFO: Got endpoints: latency-svc-j9jsq [1.10045679s] May 5 00:26:48.932: INFO: Created: latency-svc-7tp5b May 5 00:26:48.946: INFO: Got endpoints: latency-svc-7tp5b [1.11943123s] May 5 00:26:48.968: INFO: Created: latency-svc-r58d7 May 5 00:26:48.982: INFO: Got endpoints: latency-svc-r58d7 [1.046304862s] May 5 00:26:49.039: INFO: Created: latency-svc-jtmn4 May 5 00:26:49.044: INFO: Got endpoints: latency-svc-jtmn4 [952.097614ms] May 5 00:26:49.075: INFO: Created: latency-svc-w8mxl May 5 00:26:49.091: INFO: Got endpoints: latency-svc-w8mxl [944.753601ms] May 5 00:26:49.117: INFO: Created: latency-svc-gkp9l May 5 00:26:49.127: INFO: Got endpoints: latency-svc-gkp9l [938.511201ms] May 5 00:26:49.182: INFO: Created: latency-svc-jhhk9 May 5 00:26:49.184: INFO: Got endpoints: latency-svc-jhhk9 [905.37605ms] May 5 00:26:49.214: INFO: Created: latency-svc-8wblm May 5 00:26:49.224: INFO: Got endpoints: latency-svc-8wblm [891.117746ms] May 5 00:26:49.250: INFO: Created: latency-svc-554w9 May 5 00:26:49.266: INFO: Got endpoints: latency-svc-554w9 [820.843693ms] May 5 00:26:49.350: INFO: Created: latency-svc-8r8nc May 5 00:26:49.353: INFO: Got endpoints: latency-svc-8r8nc [870.533942ms] May 5 00:26:49.411: INFO: Created: latency-svc-kzsgg May 5 00:26:49.442: INFO: Got endpoints: latency-svc-kzsgg [840.753333ms] May 5 00:26:49.511: INFO: Created: latency-svc-bl88w May 5 00:26:49.538: INFO: Got endpoints: latency-svc-bl88w [888.099618ms] May 5 00:26:49.598: INFO: Created: latency-svc-w82bp May 5 
00:26:49.673: INFO: Got endpoints: latency-svc-w82bp [980.615234ms] May 5 00:26:49.676: INFO: Created: latency-svc-klkdg May 5 00:26:49.682: INFO: Got endpoints: latency-svc-klkdg [893.954514ms] May 5 00:26:49.772: INFO: Created: latency-svc-gbv6x May 5 00:26:49.846: INFO: Got endpoints: latency-svc-gbv6x [1.009090437s] May 5 00:26:49.849: INFO: Created: latency-svc-zrhcs May 5 00:26:49.857: INFO: Got endpoints: latency-svc-zrhcs [958.038874ms] May 5 00:26:49.879: INFO: Created: latency-svc-b5msx May 5 00:26:49.894: INFO: Got endpoints: latency-svc-b5msx [947.721671ms] May 5 00:26:49.915: INFO: Created: latency-svc-76g98 May 5 00:26:49.940: INFO: Got endpoints: latency-svc-76g98 [957.294881ms] May 5 00:26:50.012: INFO: Created: latency-svc-q6577 May 5 00:26:50.042: INFO: Got endpoints: latency-svc-q6577 [997.916385ms] May 5 00:26:50.084: INFO: Created: latency-svc-sljd4 May 5 00:26:50.140: INFO: Got endpoints: latency-svc-sljd4 [1.048531729s] May 5 00:26:50.154: INFO: Created: latency-svc-kd6k7 May 5 00:26:50.171: INFO: Got endpoints: latency-svc-kd6k7 [1.043947853s] May 5 00:26:50.196: INFO: Created: latency-svc-wj79m May 5 00:26:50.208: INFO: Got endpoints: latency-svc-wj79m [1.023802387s] May 5 00:26:50.311: INFO: Created: latency-svc-228b2 May 5 00:26:50.360: INFO: Got endpoints: latency-svc-228b2 [1.135780315s] May 5 00:26:50.454: INFO: Created: latency-svc-6s6w7 May 5 00:26:50.461: INFO: Got endpoints: latency-svc-6s6w7 [1.194090841s] May 5 00:26:50.515: INFO: Created: latency-svc-txh87 May 5 00:26:50.539: INFO: Got endpoints: latency-svc-txh87 [1.185810184s] May 5 00:26:50.607: INFO: Created: latency-svc-gg9f2 May 5 00:26:50.610: INFO: Got endpoints: latency-svc-gg9f2 [1.168355834s] May 5 00:26:50.670: INFO: Created: latency-svc-m99jj May 5 00:26:50.683: INFO: Got endpoints: latency-svc-m99jj [1.145150211s] May 5 00:26:50.791: INFO: Created: latency-svc-f5pkc May 5 00:26:50.798: INFO: Got endpoints: latency-svc-f5pkc [1.124531674s] May 5 00:26:50.821: INFO: 
Created: latency-svc-bggfx May 5 00:26:50.840: INFO: Got endpoints: latency-svc-bggfx [1.157812396s] May 5 00:26:50.937: INFO: Created: latency-svc-hmn8f May 5 00:26:50.940: INFO: Got endpoints: latency-svc-hmn8f [1.093401523s] May 5 00:26:50.964: INFO: Created: latency-svc-kz82r May 5 00:26:50.981: INFO: Got endpoints: latency-svc-kz82r [1.123401429s] May 5 00:26:51.000: INFO: Created: latency-svc-wz5zk May 5 00:26:51.023: INFO: Got endpoints: latency-svc-wz5zk [1.129366701s] May 5 00:26:51.086: INFO: Created: latency-svc-bxlk5 May 5 00:26:51.089: INFO: Got endpoints: latency-svc-bxlk5 [1.149727229s] May 5 00:26:51.145: INFO: Created: latency-svc-b2mvb May 5 00:26:51.155: INFO: Got endpoints: latency-svc-b2mvb [1.113236381s] May 5 00:26:51.181: INFO: Created: latency-svc-79rv9 May 5 00:26:51.247: INFO: Got endpoints: latency-svc-79rv9 [1.107672748s] May 5 00:26:51.250: INFO: Created: latency-svc-pmvwr May 5 00:26:51.276: INFO: Got endpoints: latency-svc-pmvwr [1.104631162s] May 5 00:26:51.312: INFO: Created: latency-svc-p24gt May 5 00:26:51.331: INFO: Got endpoints: latency-svc-p24gt [1.122630919s] May 5 00:26:51.415: INFO: Created: latency-svc-v8npk May 5 00:26:51.439: INFO: Got endpoints: latency-svc-v8npk [1.078843376s] May 5 00:26:51.492: INFO: Created: latency-svc-tth7g May 5 00:26:51.547: INFO: Got endpoints: latency-svc-tth7g [1.086447637s] May 5 00:26:51.551: INFO: Created: latency-svc-495zc May 5 00:26:51.583: INFO: Got endpoints: latency-svc-495zc [1.044120127s] May 5 00:26:51.628: INFO: Created: latency-svc-v7fst May 5 00:26:51.715: INFO: Got endpoints: latency-svc-v7fst [1.10438538s] May 5 00:26:51.717: INFO: Created: latency-svc-5jrz5 May 5 00:26:51.744: INFO: Got endpoints: latency-svc-5jrz5 [1.060518364s] May 5 00:26:51.864: INFO: Created: latency-svc-qm65f May 5 00:26:51.869: INFO: Got endpoints: latency-svc-qm65f [1.071348735s] May 5 00:26:51.902: INFO: Created: latency-svc-nbxlg May 5 00:26:51.909: INFO: Got endpoints: latency-svc-nbxlg 
[1.0687765s] May 5 00:26:51.937: INFO: Created: latency-svc-64w68 May 5 00:26:51.945: INFO: Got endpoints: latency-svc-64w68 [1.005458593s] May 5 00:26:52.031: INFO: Created: latency-svc-g9zvc May 5 00:26:52.032: INFO: Got endpoints: latency-svc-g9zvc [1.051755682s] May 5 00:26:52.094: INFO: Created: latency-svc-hkrk7 May 5 00:26:52.110: INFO: Got endpoints: latency-svc-hkrk7 [1.086773917s] May 5 00:26:52.176: INFO: Created: latency-svc-78qmm May 5 00:26:52.179: INFO: Got endpoints: latency-svc-78qmm [1.090121091s] May 5 00:26:52.242: INFO: Created: latency-svc-qzz8t May 5 00:26:52.259: INFO: Got endpoints: latency-svc-qzz8t [1.10365239s] May 5 00:26:52.332: INFO: Created: latency-svc-kztl9 May 5 00:26:52.343: INFO: Got endpoints: latency-svc-kztl9 [1.095942023s] May 5 00:26:52.412: INFO: Created: latency-svc-rwd9b May 5 00:26:52.517: INFO: Got endpoints: latency-svc-rwd9b [1.241043864s] May 5 00:26:52.541: INFO: Created: latency-svc-x7bx8 May 5 00:26:52.566: INFO: Got endpoints: latency-svc-x7bx8 [1.235794431s] May 5 00:26:52.602: INFO: Created: latency-svc-7xfgc May 5 00:26:52.679: INFO: Got endpoints: latency-svc-7xfgc [1.240247283s] May 5 00:26:52.711: INFO: Created: latency-svc-7tkb6 May 5 00:26:52.741: INFO: Got endpoints: latency-svc-7tkb6 [1.19417822s] May 5 00:26:52.764: INFO: Created: latency-svc-7jzxd May 5 00:26:52.847: INFO: Got endpoints: latency-svc-7jzxd [1.263554927s] May 5 00:26:52.860: INFO: Created: latency-svc-m4lpr May 5 00:26:52.885: INFO: Got endpoints: latency-svc-m4lpr [1.170010799s] May 5 00:26:52.921: INFO: Created: latency-svc-qd684 May 5 00:26:52.940: INFO: Got endpoints: latency-svc-qd684 [1.195618996s] May 5 00:26:53.009: INFO: Created: latency-svc-ffs6r May 5 00:26:53.024: INFO: Got endpoints: latency-svc-ffs6r [1.154994722s] May 5 00:26:53.152: INFO: Created: latency-svc-nmt7p May 5 00:26:53.190: INFO: Got endpoints: latency-svc-nmt7p [1.280674923s] May 5 00:26:53.190: INFO: Created: latency-svc-mjb5p May 5 00:26:53.220: INFO: Got 
endpoints: latency-svc-mjb5p [1.27403863s]
May 5 00:26:53.308: INFO: Created: latency-svc-bgf6x
May 5 00:26:53.313: INFO: Got endpoints: latency-svc-bgf6x [1.281089277s]
May 5 00:26:53.334: INFO: Created: latency-svc-jt9xw
May 5 00:26:53.352: INFO: Got endpoints: latency-svc-jt9xw [1.241782312s]
May 5 00:26:53.475: INFO: Created: latency-svc-86487
May 5 00:26:53.481: INFO: Got endpoints: latency-svc-86487 [1.301673718s]
May 5 00:26:53.508: INFO: Created: latency-svc-stdv2
May 5 00:26:53.523: INFO: Got endpoints: latency-svc-stdv2 [1.26342549s]
May 5 00:26:53.574: INFO: Created: latency-svc-zrjhp
May 5 00:26:53.657: INFO: Got endpoints: latency-svc-zrjhp [1.313197132s]
May 5 00:26:53.707: INFO: Created: latency-svc-j8hhr
May 5 00:26:53.745: INFO: Got endpoints: latency-svc-j8hhr [1.228032924s]
May 5 00:26:53.811: INFO: Created: latency-svc-9nlhv
May 5 00:26:53.819: INFO: Got endpoints: latency-svc-9nlhv [1.252113097s]
May 5 00:26:53.850: INFO: Created: latency-svc-wvfkn
May 5 00:26:53.867: INFO: Got endpoints: latency-svc-wvfkn [1.187879509s]
May 5 00:26:53.886: INFO: Created: latency-svc-ksb2v
May 5 00:26:53.903: INFO: Got endpoints: latency-svc-ksb2v [1.161379348s]
May 5 00:26:53.961: INFO: Created: latency-svc-gpxhz
May 5 00:26:53.963: INFO: Got endpoints: latency-svc-gpxhz [1.116648369s]
May 5 00:26:53.993: INFO: Created: latency-svc-wnxz9
May 5 00:26:54.012: INFO: Got endpoints: latency-svc-wnxz9 [1.126627695s]
May 5 00:26:54.035: INFO: Created: latency-svc-cjgwz
May 5 00:26:54.048: INFO: Got endpoints: latency-svc-cjgwz [1.107928434s]
May 5 00:26:54.126: INFO: Created: latency-svc-zt4sz
May 5 00:26:54.168: INFO: Got endpoints: latency-svc-zt4sz [1.14352952s]
May 5 00:26:54.272: INFO: Created: latency-svc-xgt9t
May 5 00:26:54.275: INFO: Got endpoints: latency-svc-xgt9t [1.084828316s]
May 5 00:26:54.305: INFO: Created: latency-svc-fqgc5
May 5 00:26:54.341: INFO: Got endpoints: latency-svc-fqgc5 [1.121222549s]
May 5 00:26:54.416: INFO: Created: latency-svc-8lfsf
May 5 00:26:54.462: INFO: Got endpoints: latency-svc-8lfsf [1.148451755s]
May 5 00:26:54.462: INFO: Created: latency-svc-h8zzn
May 5 00:26:54.481: INFO: Got endpoints: latency-svc-h8zzn [1.12936171s]
May 5 00:26:54.582: INFO: Created: latency-svc-dzxbg
May 5 00:26:54.589: INFO: Got endpoints: latency-svc-dzxbg [1.108226354s]
May 5 00:26:54.618: INFO: Created: latency-svc-qxxb5
May 5 00:26:54.632: INFO: Got endpoints: latency-svc-qxxb5 [1.109429574s]
May 5 00:26:54.733: INFO: Created: latency-svc-vxnzx
May 5 00:26:54.761: INFO: Created: latency-svc-j5hrs
May 5 00:26:54.761: INFO: Got endpoints: latency-svc-vxnzx [1.103955368s]
May 5 00:26:54.785: INFO: Got endpoints: latency-svc-j5hrs [1.039602155s]
May 5 00:26:54.815: INFO: Created: latency-svc-7lfbh
May 5 00:26:54.889: INFO: Got endpoints: latency-svc-7lfbh [1.07019307s]
May 5 00:26:54.906: INFO: Created: latency-svc-dpqs5
May 5 00:26:54.923: INFO: Got endpoints: latency-svc-dpqs5 [1.055349419s]
May 5 00:26:54.942: INFO: Created: latency-svc-6jhjf
May 5 00:26:54.958: INFO: Got endpoints: latency-svc-6jhjf [1.055715803s]
May 5 00:26:54.982: INFO: Created: latency-svc-j9t4q
May 5 00:26:55.050: INFO: Got endpoints: latency-svc-j9t4q [1.086554136s]
May 5 00:26:55.053: INFO: Created: latency-svc-qg9f8
May 5 00:26:55.067: INFO: Got endpoints: latency-svc-qg9f8 [1.055321982s]
May 5 00:26:55.090: INFO: Created: latency-svc-zhj42
May 5 00:26:55.103: INFO: Got endpoints: latency-svc-zhj42 [1.055775899s]
May 5 00:26:55.127: INFO: Created: latency-svc-h5xg7
May 5 00:26:55.146: INFO: Got endpoints: latency-svc-h5xg7 [978.274672ms]
May 5 00:26:55.224: INFO: Created: latency-svc-mn624
May 5 00:26:55.247: INFO: Got endpoints: latency-svc-mn624 [971.735778ms]
May 5 00:26:55.283: INFO: Created: latency-svc-g2vv9
May 5 00:26:55.296: INFO: Got endpoints: latency-svc-g2vv9 [955.414422ms]
May 5 00:26:55.368: INFO: Created: latency-svc-tb5n4
May 5 00:26:55.399: INFO: Got endpoints: latency-svc-tb5n4 [936.563167ms]
May 5 00:26:55.525: INFO: Created: latency-svc-rlgwc
May 5 00:26:55.527: INFO: Got endpoints: latency-svc-rlgwc [1.045719509s]
May 5 00:26:55.559: INFO: Created: latency-svc-xjxrb
May 5 00:26:55.573: INFO: Got endpoints: latency-svc-xjxrb [983.913572ms]
May 5 00:26:55.600: INFO: Created: latency-svc-pnrv2
May 5 00:26:55.616: INFO: Got endpoints: latency-svc-pnrv2 [983.49687ms]
May 5 00:26:55.673: INFO: Created: latency-svc-ggkxz
May 5 00:26:55.684: INFO: Got endpoints: latency-svc-ggkxz [923.512049ms]
May 5 00:26:55.732: INFO: Created: latency-svc-67fdb
May 5 00:26:55.756: INFO: Got endpoints: latency-svc-67fdb [970.628452ms]
May 5 00:26:55.823: INFO: Created: latency-svc-z7htj
May 5 00:26:55.826: INFO: Got endpoints: latency-svc-z7htj [936.722871ms]
May 5 00:26:55.853: INFO: Created: latency-svc-tnmgk
May 5 00:26:55.883: INFO: Got endpoints: latency-svc-tnmgk [960.565659ms]
May 5 00:26:55.967: INFO: Created: latency-svc-tnf7q
May 5 00:26:55.991: INFO: Got endpoints: latency-svc-tnf7q [1.032767608s]
May 5 00:26:55.992: INFO: Created: latency-svc-skj9n
May 5 00:26:56.020: INFO: Got endpoints: latency-svc-skj9n [970.066668ms]
May 5 00:26:56.118: INFO: Created: latency-svc-b9cjz
May 5 00:26:56.154: INFO: Created: latency-svc-pjx6g
May 5 00:26:56.154: INFO: Got endpoints: latency-svc-b9cjz [1.087107344s]
May 5 00:26:56.183: INFO: Got endpoints: latency-svc-pjx6g [1.079487875s]
May 5 00:26:56.248: INFO: Created: latency-svc-kl5bg
May 5 00:26:56.302: INFO: Got endpoints: latency-svc-kl5bg [1.156345743s]
May 5 00:26:56.304: INFO: Created: latency-svc-rg2h2
May 5 00:26:56.344: INFO: Got endpoints: latency-svc-rg2h2 [1.097402198s]
May 5 00:26:56.423: INFO: Created: latency-svc-ph48n
May 5 00:26:56.449: INFO: Got endpoints: latency-svc-ph48n [1.152945858s]
May 5 00:26:56.495: INFO: Created: latency-svc-pkx9b
May 5 00:26:56.519: INFO: Got endpoints: latency-svc-pkx9b [1.120630503s]
May 5 00:26:56.590: INFO: Created: latency-svc-rm5pd
May 5 00:26:56.606: INFO: Got endpoints: latency-svc-rm5pd [1.079149244s]
May 5 00:26:56.632: INFO: Created: latency-svc-nzght
May 5 00:26:56.656: INFO: Got endpoints: latency-svc-nzght [1.08303323s]
May 5 00:26:56.752: INFO: Created: latency-svc-h8pdm
May 5 00:26:56.762: INFO: Got endpoints: latency-svc-h8pdm [1.14684479s]
May 5 00:26:56.789: INFO: Created: latency-svc-d244j
May 5 00:26:56.798: INFO: Got endpoints: latency-svc-d244j [1.113880898s]
May 5 00:26:56.830: INFO: Created: latency-svc-sd8wj
May 5 00:26:56.848: INFO: Got endpoints: latency-svc-sd8wj [1.092464759s]
May 5 00:26:56.894: INFO: Created: latency-svc-pl4bl
May 5 00:26:56.898: INFO: Got endpoints: latency-svc-pl4bl [1.072341057s]
May 5 00:26:56.959: INFO: Created: latency-svc-44zrc
May 5 00:26:56.974: INFO: Got endpoints: latency-svc-44zrc [1.090978106s]
May 5 00:26:56.993: INFO: Created: latency-svc-n726h
May 5 00:26:57.051: INFO: Got endpoints: latency-svc-n726h [1.05942112s]
May 5 00:26:57.058: INFO: Created: latency-svc-zs79m
May 5 00:26:57.077: INFO: Got endpoints: latency-svc-zs79m [1.056989726s]
May 5 00:26:57.106: INFO: Created: latency-svc-6h96j
May 5 00:26:57.125: INFO: Got endpoints: latency-svc-6h96j [971.149563ms]
May 5 00:26:57.212: INFO: Created: latency-svc-rvmfn
May 5 00:26:57.226: INFO: Got endpoints: latency-svc-rvmfn [1.043500607s]
May 5 00:26:57.262: INFO: Created: latency-svc-5jznn
May 5 00:26:57.276: INFO: Got endpoints: latency-svc-5jznn [973.423436ms]
May 5 00:26:57.304: INFO: Created: latency-svc-fx4sg
May 5 00:26:57.374: INFO: Got endpoints: latency-svc-fx4sg [1.030111144s]
May 5 00:26:57.377: INFO: Created: latency-svc-j8t7v
May 5 00:26:57.385: INFO: Got endpoints: latency-svc-j8t7v [935.488677ms]
May 5 00:26:57.419: INFO: Created: latency-svc-wgcpb
May 5 00:26:57.433: INFO: Got endpoints: latency-svc-wgcpb [913.464515ms]
May 5 00:26:57.473: INFO: Created: latency-svc-bhm9p
May 5 00:26:57.511: INFO: Got endpoints: latency-svc-bhm9p [905.111616ms]
May 5 00:26:57.526: INFO: Created: latency-svc-5l7zn
May 5 00:26:57.548: INFO: Got endpoints: latency-svc-5l7zn [891.542879ms]
May 5 00:26:57.610: INFO: Created: latency-svc-9vpmc
May 5 00:26:57.671: INFO: Got endpoints: latency-svc-9vpmc [908.605326ms]
May 5 00:26:57.713: INFO: Created: latency-svc-b4rzf
May 5 00:26:57.731: INFO: Got endpoints: latency-svc-b4rzf [933.014938ms]
May 5 00:26:57.811: INFO: Created: latency-svc-rxt87
May 5 00:26:57.820: INFO: Got endpoints: latency-svc-rxt87 [972.147827ms]
May 5 00:26:57.905: INFO: Created: latency-svc-4nwxg
May 5 00:26:57.954: INFO: Got endpoints: latency-svc-4nwxg [1.056240745s]
May 5 00:26:57.977: INFO: Created: latency-svc-cdpqk
May 5 00:26:58.006: INFO: Got endpoints: latency-svc-cdpqk [1.031215328s]
May 5 00:26:58.048: INFO: Created: latency-svc-n2xzj
May 5 00:26:58.092: INFO: Got endpoints: latency-svc-n2xzj [1.04098379s]
May 5 00:26:58.120: INFO: Created: latency-svc-chfm4
May 5 00:26:58.158: INFO: Got endpoints: latency-svc-chfm4 [1.081151275s]
May 5 00:26:58.187: INFO: Created: latency-svc-hj5h6
May 5 00:26:58.266: INFO: Got endpoints: latency-svc-hj5h6 [1.140313547s]
May 5 00:26:59.315: INFO: Created: latency-svc-br2vg
May 5 00:26:59.326: INFO: Got endpoints: latency-svc-br2vg [2.099504157s]
May 5 00:26:59.370: INFO: Created: latency-svc-4z4ck
May 5 00:26:59.396: INFO: Got endpoints: latency-svc-4z4ck [2.120087131s]
May 5 00:26:59.474: INFO: Created: latency-svc-9xbx4
May 5 00:26:59.501: INFO: Got endpoints: latency-svc-9xbx4 [2.126295954s]
May 5 00:26:59.522: INFO: Created: latency-svc-cnsrl
May 5 00:26:59.541: INFO: Got endpoints: latency-svc-cnsrl [2.156302533s]
May 5 00:26:59.612: INFO: Created: latency-svc-xp5tw
May 5 00:26:59.637: INFO: Got endpoints: latency-svc-xp5tw [2.204603627s]
May 5 00:26:59.757: INFO: Created: latency-svc-9p6h8
May 5 00:26:59.764: INFO: Got endpoints: latency-svc-9p6h8 [2.251956979s]
May 5 00:26:59.817: INFO: Created: latency-svc-jczzd
May 5 00:26:59.826: INFO: Got endpoints: latency-svc-jczzd [2.277854653s]
May 5 00:26:59.852: INFO: Created: latency-svc-cdlbz
May 5 00:26:59.895: INFO: Got endpoints: latency-svc-cdlbz [2.223507535s]
May 5 00:26:59.911: INFO: Created: latency-svc-lr65s
May 5 00:26:59.921: INFO: Got endpoints: latency-svc-lr65s [2.18933142s]
May 5 00:26:59.921: INFO: Latencies: [105.435349ms 140.774018ms 219.782592ms 279.968939ms 383.119582ms 430.145404ms 459.650105ms 530.57475ms 580.047523ms 616.783533ms 688.990055ms 730.898872ms 820.843693ms 840.753333ms 856.522108ms 869.567284ms 870.533942ms 888.099618ms 891.117746ms 891.542879ms 893.954514ms 905.111616ms 905.37605ms 908.605326ms 913.464515ms 914.704642ms 923.512049ms 933.014938ms 935.488677ms 936.563167ms 936.722871ms 938.511201ms 944.753601ms 947.721671ms 952.097614ms 955.414422ms 957.294881ms 958.038874ms 960.565659ms 964.196352ms 968.906472ms 970.066668ms 970.628452ms 971.149563ms 971.735778ms 972.147827ms 973.423436ms 978.274672ms 980.615234ms 982.895923ms 983.49687ms 983.913572ms 990.376759ms 996.786158ms 997.916385ms 1.005324026s 1.005458593s 1.009090437s 1.023686774s 1.023802387s 1.030111144s 1.031215328s 1.032767608s 1.036289209s 1.039602155s 1.04098379s 1.043500607s 1.043947853s 1.044120127s 1.045719509s 1.046304862s 1.048531729s 1.051755682s 1.055321982s 1.055349419s 1.055715803s 1.055775899s 1.056240745s 1.056989726s 1.05942112s 1.060123399s 1.060518364s 1.064947972s 1.067201367s 1.0687765s 1.07019307s 1.070902277s 1.071348735s 1.072341057s 1.078843376s 1.079149244s 1.079487875s 1.081151275s 1.08303323s 1.084828316s 1.086447637s 1.086554136s 1.086773917s 1.087107344s 1.090121091s 1.090978106s 1.092464759s 1.093401523s 1.095942023s 1.096077793s 1.097402198s 1.10045679s 1.10365239s 1.103955368s 1.10438538s 1.104631162s 1.107672748s 1.107928434s 1.108226354s 1.109429574s 1.113236381s 1.113880898s 1.116648369s 1.11943123s 1.120630503s 1.121222549s 1.121911722s 1.122630919s 1.123401429s 1.124531674s 1.126627695s 1.12936171s 1.129366701s 1.135780315s 1.140313547s 1.14352952s 1.145150211s 1.14684479s 1.148451755s 1.149727229s 1.151931411s 1.152945858s 1.154994722s 1.156345743s 1.157812396s 1.159940064s 1.161300485s 1.161379348s 1.168355834s 1.170010799s 1.185810184s 1.187879509s 1.19092218s 1.194090841s 1.19417822s 1.195618996s 1.195742572s 1.209880619s 1.220189729s 1.223520603s 1.228032924s 1.235794431s 1.240247283s 1.241043864s 1.241342762s 1.241782312s 1.252113097s 1.26342549s 1.263554927s 1.269836078s 1.27403863s 1.274253549s 1.280674923s 1.281089277s 1.281331443s 1.301673718s 1.306175824s 1.313197132s 1.31784099s 1.340681031s 1.376336868s 1.389226783s 1.389911952s 1.390276588s 1.396615295s 1.402275451s 1.419387519s 1.430995279s 1.463394014s 1.468188301s 1.481150762s 1.493405133s 1.498767086s 1.509830187s 1.524087464s 1.533525193s 2.099504157s 2.120087131s 2.126295954s 2.156302533s 2.18933142s 2.204603627s 2.223507535s 2.251956979s 2.277854653s]
May 5 00:26:59.921: INFO: 50 %ile: 1.090978106s
May 5 00:26:59.921: INFO: 90 %ile: 1.402275451s
May 5 00:26:59.921: INFO: 99 %ile: 2.251956979s
May 5 00:26:59.921: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 5 00:26:59.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-8283" for this suite.
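For context on the percentile summary above: the framework sorts the 200 collected endpoint latencies and reads off the 50/90/99 %ile entries. A rough sketch of that selection follows; the exact index rounding here is an assumption for illustration, and the authoritative logic lives in test/e2e/network/service_latency.go:

```python
import math

def percentile(sorted_samples, p):
    """Return the p-th percentile of an already-sorted sample list.

    Picks the sample at ceil(p/100 * n) - 1, clamped to a valid index
    (assumed rounding; the e2e framework's rounding may differ slightly).
    """
    n = len(sorted_samples)
    idx = max(0, min(n - 1, math.ceil(p / 100 * n) - 1))
    return sorted_samples[idx]

# Illustrative values in seconds, not the full 200-sample run above.
samples = sorted([1.274, 1.281, 0.978, 1.090, 2.251])
print(percentile(samples, 50))  # -> 1.274
```

With 200 samples, the 50 %ile is simply one of the measured values (here 1.090978106s), not an interpolated number, which is why every reported percentile also appears verbatim in the sorted list.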
• [SLOW TEST:20.182 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":288,"completed":149,"skipped":2482,"failed":0}
SSSS
------------------------------
[sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] PodTemplates
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 5 00:26:59.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run the lifecycle of PodTemplates [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [sig-node] PodTemplates
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 5 00:27:00.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-858" for this suite.
•
{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":288,"completed":150,"skipped":2486,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 5 00:27:00.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
May 5 00:27:00.286: INFO: Pod name pod-release: Found 0 pods out of 1
May 5 00:27:05.288: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 5 00:27:05.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4709" for this suite.
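The ReplicationController case above ("should release no longer matching pods") hinges on equality-based label selection: once a pod's labels stop satisfying the RC's selector, the controller orphans it and creates a replacement. A minimal sketch of the matching rule (simplified and illustrative only; the real logic lives in the kube-controller-manager, not in this e2e suite):

```python
def matches_selector(selector: dict, pod_labels: dict) -> bool:
    # An equality-based selector matches when every key/value pair
    # appears in the pod's labels (extra pod labels are ignored).
    return all(pod_labels.get(k) == v for k, v in selector.items())

selector = {"name": "pod-release"}
print(matches_selector(selector, {"name": "pod-release", "extra": "x"}))  # True
print(matches_selector(selector, {"name": "changed"}))                    # False
```

Relabeling the pod so the second case applies is exactly the "matched label of one of its pods change" step in the log, after which the RC no longer counts that pod toward its replica total.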
• [SLOW TEST:5.254 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":288,"completed":151,"skipped":2495,"failed":0}
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 5 00:27:05.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-30c03e2b-4531-4697-9d91-82c90cd2526d
STEP: Creating a pod to test consume secrets
May 5 00:27:05.961: INFO: Waiting up to 5m0s for pod "pod-secrets-557c1c32-3151-4909-b824-ebc3e8893289" in namespace "secrets-4933" to be "Succeeded or Failed"
May 5 00:27:05.999: INFO: Pod "pod-secrets-557c1c32-3151-4909-b824-ebc3e8893289": Phase="Pending", Reason="", readiness=false. Elapsed: 37.172034ms
May 5 00:27:08.008: INFO: Pod "pod-secrets-557c1c32-3151-4909-b824-ebc3e8893289": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046756139s
May 5 00:27:10.020: INFO: Pod "pod-secrets-557c1c32-3151-4909-b824-ebc3e8893289": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058930854s
May 5 00:27:12.042: INFO: Pod "pod-secrets-557c1c32-3151-4909-b824-ebc3e8893289": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.080467388s
STEP: Saw pod success
May 5 00:27:12.042: INFO: Pod "pod-secrets-557c1c32-3151-4909-b824-ebc3e8893289" satisfied condition "Succeeded or Failed"
May 5 00:27:12.048: INFO: Trying to get logs from node latest-worker pod pod-secrets-557c1c32-3151-4909-b824-ebc3e8893289 container secret-volume-test:
STEP: delete the pod
May 5 00:27:12.458: INFO: Waiting for pod pod-secrets-557c1c32-3151-4909-b824-ebc3e8893289 to disappear
May 5 00:27:12.467: INFO: Pod pod-secrets-557c1c32-3151-4909-b824-ebc3e8893289 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 5 00:27:12.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4933" for this suite.
• [SLOW TEST:7.034 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":152,"skipped":2495,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 5 00:27:12.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:80
May 5 00:27:12.737: INFO: Waiting up to 1m0s for all nodes to be ready
May 5 00:28:12.762: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PreemptionExecutionPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 5 00:28:12.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PreemptionExecutionPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:467
STEP: Finding an available node
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
May 5 00:28:16.877: INFO: found a healthy node: latest-worker
[It] runs ReplicaSets to verify preemption running path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 5 00:28:33.322: INFO: pods created so far: [1 1 1]
May 5 00:28:33.322: INFO: length of pods created so far: 3
May 5 00:28:49.398: INFO: pods created so far: [2 2 1]
[AfterEach] PreemptionExecutionPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 5 00:28:56.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-8624" for this suite.
[AfterEach] PreemptionExecutionPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:439
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 5 00:28:56.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-1425" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:74
• [SLOW TEST:104.135 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PreemptionExecutionPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:428
    runs ReplicaSets to verify preemption running path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":288,"completed":153,"skipped":2529,"failed":0}
S
------------------------------
[k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 5 00:28:56.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod var-expansion-5decc28a-e747-4cd9-af47-d7002c6c066c
STEP: updating the pod
May 5 00:29:05.236: INFO: Successfully updated pod "var-expansion-5decc28a-e747-4cd9-af47-d7002c6c066c"
STEP: waiting for pod and container restart
STEP: Failing liveness probe
May 5 00:29:05.297: INFO: ExecWithOptions {Command:[/bin/sh -c rm /volume_mount/foo/test.log] Namespace:var-expansion-6674 PodName:var-expansion-5decc28a-e747-4cd9-af47-d7002c6c066c ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 5 00:29:05.297: INFO: >>> kubeConfig: /root/.kube/config
I0505 00:29:05.330371 7 log.go:172] (0xc002b79600) (0xc000ac3c20) Create stream
I0505 00:29:05.330405 7 log.go:172] (0xc002b79600) (0xc000ac3c20) Stream added, broadcasting: 1
I0505 00:29:05.332850 7 log.go:172] (0xc002b79600) Reply frame received for 1
I0505 00:29:05.332888 7 log.go:172] (0xc002b79600) (0xc0014a6d20) Create stream
I0505 00:29:05.332902 7 log.go:172] (0xc002b79600) (0xc0014a6d20) Stream added, broadcasting: 3
I0505 00:29:05.334184 7 log.go:172] (0xc002b79600) Reply frame received for 3
I0505 00:29:05.334232 7 log.go:172] (0xc002b79600) (0xc0014a6dc0) Create stream
I0505 00:29:05.334251 7 log.go:172] (0xc002b79600) (0xc0014a6dc0) Stream added, broadcasting: 5
I0505 00:29:05.335341 7 log.go:172] (0xc002b79600) Reply frame received for 5
I0505 00:29:05.401953 7 log.go:172] (0xc002b79600) Data frame received for 5
I0505 00:29:05.401998 7 log.go:172] (0xc0014a6dc0) (5) Data frame handling
I0505 00:29:05.402028 7 log.go:172] (0xc002b79600) Data frame received for 3
I0505 00:29:05.402043 7 log.go:172] (0xc0014a6d20) (3) Data frame handling
I0505 00:29:05.403593 7 log.go:172] (0xc002b79600) Data frame received for 1
I0505 00:29:05.403616 7 log.go:172] (0xc000ac3c20) (1) Data frame handling
I0505 00:29:05.403628 7 log.go:172] (0xc000ac3c20) (1) Data frame sent
I0505 00:29:05.403644 7 log.go:172] (0xc002b79600) (0xc000ac3c20) Stream removed, broadcasting: 1
I0505 00:29:05.403694 7 log.go:172] (0xc002b79600) Go away received
I0505 00:29:05.403769 7 log.go:172] (0xc002b79600) (0xc000ac3c20) Stream removed, broadcasting: 1
I0505 00:29:05.403782 7 log.go:172] (0xc002b79600) (0xc0014a6d20) Stream removed, broadcasting: 3
I0505 00:29:05.404085 7 log.go:172] (0xc002b79600) (0xc0014a6dc0) Stream removed, broadcasting: 5
May 5 00:29:05.404: INFO: Pod exec output: /
STEP: Waiting for container to restart
May 5 00:29:05.450: INFO: Container dapi-container, restarts: 0
May 5 00:29:15.454: INFO: Container dapi-container, restarts: 0
May 5 00:29:25.454: INFO: Container dapi-container, restarts: 0
May 5 00:29:35.455: INFO: Container dapi-container, restarts: 0
May 5 00:29:45.455: INFO: Container dapi-container, restarts: 1
May 5 00:29:45.455: INFO: Container has restart count: 1
STEP: Rewriting the file
May 5 00:29:45.459: INFO: ExecWithOptions {Command:[/bin/sh -c echo test-after > /volume_mount/foo/test.log] Namespace:var-expansion-6674 PodName:var-expansion-5decc28a-e747-4cd9-af47-d7002c6c066c ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 5 00:29:45.459: INFO: >>> kubeConfig: /root/.kube/config
I0505 00:29:45.487474 7 log.go:172] (0xc001f7a370) (0xc001333d60) Create stream
I0505 00:29:45.487509 7 log.go:172] (0xc001f7a370) (0xc001333d60) Stream added, broadcasting: 1
I0505 00:29:45.489534 7 log.go:172] (0xc001f7a370) Reply frame received for 1
I0505 00:29:45.489579 7 log.go:172] (0xc001f7a370) (0xc000270d20) Create stream
I0505 00:29:45.489595 7 log.go:172] (0xc001f7a370) (0xc000270d20) Stream added, broadcasting: 3
I0505 00:29:45.490360 7 log.go:172] (0xc001f7a370) Reply frame received for 3
I0505 00:29:45.490390 7 log.go:172] (0xc001f7a370) (0xc0029c0320) Create stream
I0505 00:29:45.490399 7 log.go:172] (0xc001f7a370) (0xc0029c0320) Stream added, broadcasting: 5
I0505 00:29:45.491186 7 log.go:172] (0xc001f7a370) Reply frame received for 5
I0505 00:29:45.565600 7 log.go:172] (0xc001f7a370) Data frame received for 3
I0505 00:29:45.565658 7 log.go:172] (0xc001f7a370) Data frame received for 5
I0505 00:29:45.565720 7 log.go:172] (0xc0029c0320) (5) Data frame handling
I0505 00:29:45.565764 7 log.go:172] (0xc000270d20) (3) Data frame handling
I0505 00:29:45.567419 7 log.go:172] (0xc001f7a370) Data frame received for 1
I0505 00:29:45.567440 7 log.go:172] (0xc001333d60) (1) Data frame handling
I0505 00:29:45.567450 7 log.go:172] (0xc001333d60) (1) Data frame sent
I0505 00:29:45.567460 7 log.go:172] (0xc001f7a370) (0xc001333d60) Stream removed, broadcasting: 1
I0505 00:29:45.567536 7 log.go:172] (0xc001f7a370) (0xc001333d60) Stream removed, broadcasting: 1
I0505 00:29:45.567551 7 log.go:172] (0xc001f7a370) (0xc000270d20) Stream removed, broadcasting: 3
I0505 00:29:45.567564 7 log.go:172] (0xc001f7a370) (0xc0029c0320) Stream removed, broadcasting: 5
May 5 00:29:45.567: INFO: Pod exec output:
I0505 00:29:45.567587 7 log.go:172] (0xc001f7a370) Go away received
STEP: Waiting for container to stop restarting
May 5 00:30:13.575: INFO: Container has restart count: 2
May 5 00:31:15.575: INFO: Container restart has stabilized
STEP: test for subpath mounted with old value
May 5 00:31:15.579: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /volume_mount/foo/test.log] Namespace:var-expansion-6674 PodName:var-expansion-5decc28a-e747-4cd9-af47-d7002c6c066c ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 5 00:31:15.579: INFO: >>> kubeConfig: /root/.kube/config
I0505 00:31:15.606804 7 log.go:172] (0xc002b79290) (0xc0012f6e60) Create stream
I0505 00:31:15.606838 7 log.go:172] (0xc002b79290) (0xc0012f6e60) Stream added, broadcasting: 1
I0505 00:31:15.608542 7 log.go:172] (0xc002b79290) Reply frame received for 1
I0505 00:31:15.608576 7 log.go:172] (0xc002b79290) (0xc0012f0460) Create stream
I0505 00:31:15.608588 7 log.go:172] (0xc002b79290) (0xc0012f0460) Stream added, broadcasting: 3
I0505 00:31:15.610002 7 log.go:172] (0xc002b79290) Reply frame received for 3
I0505 00:31:15.610046 7 log.go:172] (0xc002b79290) (0xc001b9c000) Create stream
I0505 00:31:15.610061 7 log.go:172] (0xc002b79290) (0xc001b9c000) Stream added, broadcasting: 5
I0505 00:31:15.611161 7 log.go:172] (0xc002b79290) Reply frame received for 5
I0505 00:31:15.689992 7 log.go:172] (0xc002b79290) Data frame received for 5
I0505 00:31:15.690045 7 log.go:172] (0xc001b9c000) (5) Data frame handling
I0505 00:31:15.690074 7 log.go:172] (0xc002b79290) Data frame received for 3
I0505 00:31:15.690085 7 log.go:172] (0xc0012f0460) (3) Data frame handling
I0505 00:31:15.691657 7 log.go:172] (0xc002b79290) Data frame received for 1
I0505 00:31:15.691699 7 log.go:172] (0xc0012f6e60) (1) Data frame handling
I0505 00:31:15.691718 7 log.go:172] (0xc0012f6e60) (1) Data frame sent
I0505 00:31:15.691734 7 log.go:172] (0xc002b79290) (0xc0012f6e60) Stream removed, broadcasting: 1
I0505 00:31:15.691816 7 log.go:172] (0xc002b79290) (0xc0012f6e60) Stream removed, broadcasting: 1
I0505 00:31:15.691835 7 log.go:172] (0xc002b79290) (0xc0012f0460) Stream removed, broadcasting: 3
I0505 00:31:15.691849 7 log.go:172] (0xc002b79290) (0xc001b9c000) Stream removed, broadcasting: 5
I0505 00:31:15.692173 7 log.go:172] (0xc002b79290) Go away received
May 5 00:31:15.696: INFO: ExecWithOptions {Command:[/bin/sh -c test ! -f /volume_mount/newsubpath/test.log] Namespace:var-expansion-6674 PodName:var-expansion-5decc28a-e747-4cd9-af47-d7002c6c066c ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 5 00:31:15.696: INFO: >>> kubeConfig: /root/.kube/config
I0505 00:31:15.725551 7 log.go:172] (0xc002a9c210) (0xc0014ed0e0) Create stream
I0505 00:31:15.725579 7 log.go:172] (0xc002a9c210) (0xc0014ed0e0) Stream added, broadcasting: 1
I0505 00:31:15.727681 7 log.go:172] (0xc002a9c210) Reply frame received for 1
I0505 00:31:15.727720 7 log.go:172] (0xc002a9c210) (0xc0014ed220) Create stream
I0505 00:31:15.727731 7 log.go:172] (0xc002a9c210) (0xc0014ed220) Stream added, broadcasting: 3
I0505 00:31:15.728623 7 log.go:172] (0xc002a9c210) Reply frame received for 3
I0505 00:31:15.728657 7 log.go:172] (0xc002a9c210) (0xc001b9c140) Create stream
I0505 00:31:15.728667 7 log.go:172] (0xc002a9c210) (0xc001b9c140) Stream added, broadcasting: 5
I0505 00:31:15.729746 7 log.go:172] (0xc002a9c210) Reply frame received for 5
I0505 00:31:15.800955 7 log.go:172] (0xc002a9c210) Data frame received for 3
I0505 00:31:15.800984 7 log.go:172] (0xc0014ed220) (3) Data frame handling
I0505 00:31:15.801245 7 log.go:172] (0xc002a9c210) Data frame received for 5
I0505 00:31:15.801265 7 log.go:172] (0xc001b9c140) (5) Data frame handling
I0505 00:31:15.802896 7 log.go:172] (0xc002a9c210) Data frame received for 1
I0505 00:31:15.802992 7 log.go:172] (0xc0014ed0e0) (1) Data frame handling
I0505 00:31:15.803028 7 log.go:172] (0xc0014ed0e0) (1) Data frame sent
I0505 00:31:15.803046 7 log.go:172] (0xc002a9c210) (0xc0014ed0e0) Stream removed, broadcasting: 1
I0505 00:31:15.803101 7 log.go:172] (0xc002a9c210) Go away received
I0505 00:31:15.803249 7 log.go:172] (0xc002a9c210) (0xc0014ed0e0) Stream removed, broadcasting: 1
I0505 00:31:15.803284 7 log.go:172] (0xc002a9c210) (0xc0014ed220) Stream removed, broadcasting: 3
I0505 00:31:15.803314 7 log.go:172] (0xc002a9c210) (0xc001b9c140) Stream removed, broadcasting: 5
May 5 00:31:15.803: INFO: Deleting pod "var-expansion-5decc28a-e747-4cd9-af47-d7002c6c066c" in namespace "var-expansion-6674"
May 5 00:31:15.809: INFO: Wait up to 5m0s for pod "var-expansion-5decc28a-e747-4cd9-af47-d7002c6c066c" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 5 00:31:55.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6674" for this suite.
• [SLOW TEST:179.198 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":288,"completed":154,"skipped":2530,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 5 00:31:55.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 5 00:32:12.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9333" for this suite.
• [SLOW TEST:16.404 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":288,"completed":155,"skipped":2539,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 5 00:32:12.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name projected-secret-test-edd8c78f-0eff-4d41-a175-0c591407f50f
STEP: Creating a pod to test consume secrets
May 5 00:32:12.345: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0affe9d8-a163-4ca3-ab8c-84eecb415308" in namespace "projected-3891" to be "Succeeded or Failed"
May 5 00:32:12.466: INFO: Pod "pod-projected-secrets-0affe9d8-a163-4ca3-ab8c-84eecb415308": Phase="Pending", Reason="", readiness=false. Elapsed: 120.483014ms
May 5 00:32:14.469: INFO: Pod "pod-projected-secrets-0affe9d8-a163-4ca3-ab8c-84eecb415308": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12428774s
May 5 00:32:16.474: INFO: Pod "pod-projected-secrets-0affe9d8-a163-4ca3-ab8c-84eecb415308": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.128748395s STEP: Saw pod success May 5 00:32:16.474: INFO: Pod "pod-projected-secrets-0affe9d8-a163-4ca3-ab8c-84eecb415308" satisfied condition "Succeeded or Failed" May 5 00:32:16.477: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-0affe9d8-a163-4ca3-ab8c-84eecb415308 container secret-volume-test: STEP: delete the pod May 5 00:32:16.611: INFO: Waiting for pod pod-projected-secrets-0affe9d8-a163-4ca3-ab8c-84eecb415308 to disappear May 5 00:32:16.626: INFO: Pod pod-projected-secrets-0affe9d8-a163-4ca3-ab8c-84eecb415308 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:32:16.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3891" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":156,"skipped":2547,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:32:16.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' 
label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:32:21.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2995" for this suite. • [SLOW TEST:5.250 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":288,"completed":157,"skipped":2577,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:32:21.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 5 
00:32:21.999: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7f4e2456-df2e-4fed-97ba-db89be90f365" in namespace "projected-501" to be "Succeeded or Failed" May 5 00:32:22.004: INFO: Pod "downwardapi-volume-7f4e2456-df2e-4fed-97ba-db89be90f365": Phase="Pending", Reason="", readiness=false. Elapsed: 5.167594ms May 5 00:32:24.011: INFO: Pod "downwardapi-volume-7f4e2456-df2e-4fed-97ba-db89be90f365": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011788697s May 5 00:32:26.015: INFO: Pod "downwardapi-volume-7f4e2456-df2e-4fed-97ba-db89be90f365": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016090783s STEP: Saw pod success May 5 00:32:26.015: INFO: Pod "downwardapi-volume-7f4e2456-df2e-4fed-97ba-db89be90f365" satisfied condition "Succeeded or Failed" May 5 00:32:26.018: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-7f4e2456-df2e-4fed-97ba-db89be90f365 container client-container: STEP: delete the pod May 5 00:32:26.053: INFO: Waiting for pod downwardapi-volume-7f4e2456-df2e-4fed-97ba-db89be90f365 to disappear May 5 00:32:26.076: INFO: Pod downwardapi-volume-7f4e2456-df2e-4fed-97ba-db89be90f365 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:32:26.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-501" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":158,"skipped":2633,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:32:26.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:32:30.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7191" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":159,"skipped":2638,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:32:30.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-a1713d5d-828c-4536-bd55-a2c8f5dcccc3 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:32:36.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2522" for this suite. 
• [SLOW TEST:6.202 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":160,"skipped":2707,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:32:36.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 5 00:32:36.508: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 5 00:32:41.510: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 5 00:32:41.510: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 5 00:32:41.549: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-2914 
/apis/apps/v1/namespaces/deployment-2914/deployments/test-cleanup-deployment b26a7051-7c56-41f5-99c4-c64b1b7b365d 1530085 1 2020-05-05 00:32:41 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-05-05 00:32:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c00608 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 5 00:32:41.623: INFO: New ReplicaSet "test-cleanup-deployment-6688745694" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-6688745694 deployment-2914 /apis/apps/v1/namespaces/deployment-2914/replicasets/test-cleanup-deployment-6688745694 bf56fed5-2061-496b-8600-f12a9a4ede8f 1530087 1 2020-05-05 00:32:41 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment b26a7051-7c56-41f5-99c4-c64b1b7b365d 0xc003912b97 0xc003912b98}] [] [{kube-controller-manager Update apps/v1 2020-05-05 00:32:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b26a7051-7c56-41f5-99c4-c64b1b7b365d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6688745694,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003912c28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 5 00:32:41.623: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 5 00:32:41.623: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-2914 /apis/apps/v1/namespaces/deployment-2914/replicasets/test-cleanup-controller cdc3bc4a-62db-40c6-8229-095c96dacb65 1530086 1 2020-05-05 00:32:36 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment b26a7051-7c56-41f5-99c4-c64b1b7b365d 0xc003912a87 0xc003912a88}] [] [{e2e.test Update apps/v1 2020-05-05 00:32:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-05 00:32:41 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"b26a7051-7c56-41f5-99c4-c64b1b7b365d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] 
[] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003912b28 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 5 00:32:41.982: INFO: Pod "test-cleanup-controller-9nkhg" is available: &Pod{ObjectMeta:{test-cleanup-controller-9nkhg test-cleanup-controller- deployment-2914 /api/v1/namespaces/deployment-2914/pods/test-cleanup-controller-9nkhg a034de9b-ea4b-4ea9-9a73-24062d20ff8d 1530073 0 2020-05-05 00:32:36 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller cdc3bc4a-62db-40c6-8229-095c96dacb65 0xc002c00ae7 0xc002c00ae8}] [] [{kube-controller-manager Update v1 2020-05-05 00:32:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cdc3bc4a-62db-40c6-8229-095c96dacb65\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 00:32:39 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.22\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nwp8j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nwp8j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nwp8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServic
eAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 00:32:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 00:32:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 00:32:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 00:32:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.22,StartTime:2020-05-05 00:32:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-05 00:32:39 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://837de10c538f0c733be684cbd2b25c283fcd397ca379377bd12e4a7661d35eec,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.22,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 00:32:41.982: INFO: Pod "test-cleanup-deployment-6688745694-zqbdz" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-6688745694-zqbdz test-cleanup-deployment-6688745694- deployment-2914 /api/v1/namespaces/deployment-2914/pods/test-cleanup-deployment-6688745694-zqbdz 7ee1cae4-bcc7-450d-952e-88ba7f8713d1 1530091 0 2020-05-05 00:32:41 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-6688745694 bf56fed5-2061-496b-8600-f12a9a4ede8f 0xc002c00ca7 0xc002c00ca8}] [] [{kube-controller-manager Update v1 2020-05-05 00:32:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf56fed5-2061-496b-8600-f12a9a4ede8f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nwp8j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nwp8j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nwp8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil
,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 00:32:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:32:41.982: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2914" for this suite. • [SLOW TEST:5.697 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":288,"completed":161,"skipped":2726,"failed":0} SSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:32:42.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container May 5 00:32:46.913: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8711 pod-service-account-ea9c310d-fb55-4e1d-a11b-55d0aa04f362 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 5 00:32:50.562: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8711 pod-service-account-ea9c310d-fb55-4e1d-a11b-55d0aa04f362 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 5 00:32:50.768: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8711 
pod-service-account-ea9c310d-fb55-4e1d-a11b-55d0aa04f362 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:32:50.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8711" for this suite. • [SLOW TEST:8.939 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":288,"completed":162,"skipped":2734,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:32:51.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:32:51.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9571" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":288,"completed":163,"skipped":2736,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:32:51.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6424.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6424.svc.cluster.local;check="$$(dig +tcp +noall +answer +search 
dns-test-service-2.dns-6424.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6424.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6424.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6424.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6424.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6424.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6424.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6424.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 5 00:32:57.348: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:32:57.351: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:32:57.354: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:32:57.356: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:32:57.364: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:32:57.366: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local from pod 
dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:32:57.369: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:32:57.371: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:32:57.378: INFO: Lookups using dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6424.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6424.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local jessie_udp@dns-test-service-2.dns-6424.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6424.svc.cluster.local] May 5 00:33:02.384: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:02.388: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:02.391: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6424.svc.cluster.local from pod 
dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:02.395: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:02.406: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:02.409: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:02.412: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:02.415: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:02.422: INFO: Lookups using dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6424.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6424.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local jessie_udp@dns-test-service-2.dns-6424.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6424.svc.cluster.local] May 5 00:33:07.385: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:07.388: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:07.409: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:08.336: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:08.374: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:08.378: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:08.381: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6424.svc.cluster.local from pod 
dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:08.384: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:08.389: INFO: Lookups using dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6424.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6424.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local jessie_udp@dns-test-service-2.dns-6424.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6424.svc.cluster.local] May 5 00:33:12.455: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:12.459: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:12.461: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:12.464: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6424.svc.cluster.local from pod 
dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:12.473: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:12.476: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:12.479: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:12.482: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:12.488: INFO: Lookups using dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6424.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6424.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local jessie_udp@dns-test-service-2.dns-6424.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6424.svc.cluster.local] May 5 00:33:17.382: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local 
from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:17.386: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:17.389: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:17.392: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:17.401: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:17.404: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:17.407: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:17.410: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the 
server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:17.416: INFO: Lookups using dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6424.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6424.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local jessie_udp@dns-test-service-2.dns-6424.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6424.svc.cluster.local] May 5 00:33:22.383: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:22.387: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:22.391: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:22.394: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:22.404: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local from pod 
dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:22.408: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:22.411: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:22.414: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6424.svc.cluster.local from pod dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb: the server could not find the requested resource (get pods dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb) May 5 00:33:22.420: INFO: Lookups using dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6424.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6424.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6424.svc.cluster.local jessie_udp@dns-test-service-2.dns-6424.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6424.svc.cluster.local] May 5 00:33:27.416: INFO: DNS probes using dns-6424/dns-test-9f8c3a28-96f0-480c-ae81-a6c22f73d4eb succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:33:27.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "dns-6424" for this suite. • [SLOW TEST:37.111 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":288,"completed":164,"skipped":2747,"failed":0} SSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:33:28.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all May 5 00:33:28.439: INFO: Waiting up to 5m0s for pod "client-containers-5ed843f3-5bc2-4408-ba39-bd78381c868e" in namespace "containers-6929" to be "Succeeded or Failed" May 5 00:33:28.442: INFO: Pod "client-containers-5ed843f3-5bc2-4408-ba39-bd78381c868e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.043196ms May 5 00:33:30.532: INFO: Pod "client-containers-5ed843f3-5bc2-4408-ba39-bd78381c868e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093171739s May 5 00:33:32.537: INFO: Pod "client-containers-5ed843f3-5bc2-4408-ba39-bd78381c868e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.097790068s STEP: Saw pod success May 5 00:33:32.537: INFO: Pod "client-containers-5ed843f3-5bc2-4408-ba39-bd78381c868e" satisfied condition "Succeeded or Failed" May 5 00:33:32.540: INFO: Trying to get logs from node latest-worker pod client-containers-5ed843f3-5bc2-4408-ba39-bd78381c868e container test-container: STEP: delete the pod May 5 00:33:32.606: INFO: Waiting for pod client-containers-5ed843f3-5bc2-4408-ba39-bd78381c868e to disappear May 5 00:33:32.611: INFO: Pod client-containers-5ed843f3-5bc2-4408-ba39-bd78381c868e no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:33:32.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6929" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":288,"completed":165,"skipped":2751,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:33:32.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 5 00:33:32.673: INFO: Waiting up to 5m0s for pod "downwardapi-volume-62e0c9dd-5ccb-441e-bd65-82efccde6804" in namespace "projected-6402" to be "Succeeded or Failed" May 5 00:33:32.687: INFO: Pod "downwardapi-volume-62e0c9dd-5ccb-441e-bd65-82efccde6804": Phase="Pending", Reason="", readiness=false. Elapsed: 13.785043ms May 5 00:33:34.691: INFO: Pod "downwardapi-volume-62e0c9dd-5ccb-441e-bd65-82efccde6804": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017563096s May 5 00:33:36.712: INFO: Pod "downwardapi-volume-62e0c9dd-5ccb-441e-bd65-82efccde6804": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039317439s STEP: Saw pod success May 5 00:33:36.712: INFO: Pod "downwardapi-volume-62e0c9dd-5ccb-441e-bd65-82efccde6804" satisfied condition "Succeeded or Failed" May 5 00:33:36.716: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-62e0c9dd-5ccb-441e-bd65-82efccde6804 container client-container: STEP: delete the pod May 5 00:33:36.750: INFO: Waiting for pod downwardapi-volume-62e0c9dd-5ccb-441e-bd65-82efccde6804 to disappear May 5 00:33:36.755: INFO: Pod downwardapi-volume-62e0c9dd-5ccb-441e-bd65-82efccde6804 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:33:36.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6402" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":166,"skipped":2767,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:33:36.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-5a217a03-3f76-4019-89a3-0486736f26e2 STEP: Creating a pod to test consume secrets May 5 00:33:36.889: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ec7b1e78-b0ee-4d38-b277-c43a64f157a9" in namespace "projected-9712" to be "Succeeded or Failed" May 5 00:33:36.900: INFO: Pod "pod-projected-secrets-ec7b1e78-b0ee-4d38-b277-c43a64f157a9": Phase="Pending", Reason="", readiness=false. Elapsed: 11.034218ms May 5 00:33:38.904: INFO: Pod "pod-projected-secrets-ec7b1e78-b0ee-4d38-b277-c43a64f157a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015064521s May 5 00:33:40.908: INFO: Pod "pod-projected-secrets-ec7b1e78-b0ee-4d38-b277-c43a64f157a9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018897627s STEP: Saw pod success May 5 00:33:40.908: INFO: Pod "pod-projected-secrets-ec7b1e78-b0ee-4d38-b277-c43a64f157a9" satisfied condition "Succeeded or Failed" May 5 00:33:40.911: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-ec7b1e78-b0ee-4d38-b277-c43a64f157a9 container projected-secret-volume-test: STEP: delete the pod May 5 00:33:41.007: INFO: Waiting for pod pod-projected-secrets-ec7b1e78-b0ee-4d38-b277-c43a64f157a9 to disappear May 5 00:33:41.018: INFO: Pod pod-projected-secrets-ec7b1e78-b0ee-4d38-b277-c43a64f157a9 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:33:41.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9712" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":167,"skipped":2784,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:33:41.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 5 00:33:41.125: INFO: Pod name 
rollover-pod: Found 0 pods out of 1 May 5 00:33:46.143: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 5 00:33:46.143: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 5 00:33:48.148: INFO: Creating deployment "test-rollover-deployment" May 5 00:33:48.163: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 5 00:33:50.169: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 5 00:33:50.175: INFO: Ensure that both replica sets have 1 created replica May 5 00:33:50.180: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 5 00:33:50.187: INFO: Updating deployment test-rollover-deployment May 5 00:33:50.187: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 5 00:33:52.198: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 5 00:33:52.206: INFO: Make sure deployment "test-rollover-deployment" is complete May 5 00:33:52.211: INFO: all replica sets need to contain the pod-template-hash label May 5 00:33:52.212: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235628, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235628, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235630, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235628, loc:(*time.Location)(0x7c2f200)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 00:33:54.221: INFO: all replica sets need to contain the pod-template-hash label May 5 00:33:54.221: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235628, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235628, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235633, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235628, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 00:33:56.220: INFO: all replica sets need to contain the pod-template-hash label May 5 00:33:56.220: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235628, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235628, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235633, loc:(*time.Location)(0x7c2f200)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235628, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 00:33:58.220: INFO: all replica sets need to contain the pod-template-hash label May 5 00:33:58.221: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235628, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235628, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235633, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235628, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 00:34:00.220: INFO: all replica sets need to contain the pod-template-hash label May 5 00:34:00.220: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235628, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235628, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235633, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235628, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 00:34:02.220: INFO: all replica sets need to contain the pod-template-hash label May 5 00:34:02.221: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235628, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235628, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235633, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235628, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 00:34:04.220: INFO: May 5 00:34:04.220: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 5 00:34:04.228: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-3695 /apis/apps/v1/namespaces/deployment-3695/deployments/test-rollover-deployment b95f57f9-5dba-4ccf-86d3-3aca07c316d6 1530629 2 2020-05-05 00:33:48 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 
2020-05-05 00:33:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-05 00:34:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0036fad38 ClusterFirst 
map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-05 00:33:48 +0000 UTC,LastTransitionTime:2020-05-05 00:33:48 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-7c4fd9c879" has successfully progressed.,LastUpdateTime:2020-05-05 00:34:04 +0000 UTC,LastTransitionTime:2020-05-05 00:33:48 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 5 00:34:04.231: INFO: New ReplicaSet "test-rollover-deployment-7c4fd9c879" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-7c4fd9c879 deployment-3695 /apis/apps/v1/namespaces/deployment-3695/replicasets/test-rollover-deployment-7c4fd9c879 968cc271-f89d-4bd8-a001-d9f6c2f1a5f5 1530618 2 2020-05-05 00:33:50 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment b95f57f9-5dba-4ccf-86d3-3aca07c316d6 0xc0036fb387 0xc0036fb388}] [] [{kube-controller-manager Update apps/v1 2020-05-05 00:34:03 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b95f57f9-5dba-4ccf-86d3-3aca07c316d6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 7c4fd9c879,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0036fb418 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 5 00:34:04.231: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 5 00:34:04.231: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-3695 /apis/apps/v1/namespaces/deployment-3695/replicasets/test-rollover-controller ed75c142-0942-4068-9c1f-6c3d22a51d1e 1530628 2 2020-05-05 00:33:41 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment b95f57f9-5dba-4ccf-86d3-3aca07c316d6 0xc0036fb157 0xc0036fb158}] [] [{e2e.test Update apps/v1 2020-05-05 00:33:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-05 00:34:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b95f57f9-5dba-4ccf-86d3-3aca07c316d6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0036fb1f8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 5 00:34:04.232: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5 deployment-3695 /apis/apps/v1/namespaces/deployment-3695/replicasets/test-rollover-deployment-5686c4cfd5 4be3ef35-922a-435b-bbc3-95179dac34d4 1530569 2 2020-05-05 00:33:48 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment b95f57f9-5dba-4ccf-86d3-3aca07c316d6 0xc0036fb287 0xc0036fb288}] [] [{kube-controller-manager Update apps/v1 2020-05-05 00:33:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b95f57f9-5dba-4ccf-86d3-3aca07c316d6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0036fb318 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] 
map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 5 00:34:04.235: INFO: Pod "test-rollover-deployment-7c4fd9c879-44rf5" is available: &Pod{ObjectMeta:{test-rollover-deployment-7c4fd9c879-44rf5 test-rollover-deployment-7c4fd9c879- deployment-3695 /api/v1/namespaces/deployment-3695/pods/test-rollover-deployment-7c4fd9c879-44rf5 db466250-73c2-4c8c-a6cf-91bd0a3eba03 1530586 0 2020-05-05 00:33:50 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7c4fd9c879 968cc271-f89d-4bd8-a001-d9f6c2f1a5f5 0xc0036fb9d7 0xc0036fb9d8}] [] [{kube-controller-manager Update v1 2020-05-05 00:33:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"968cc271-f89d-4bd8-a001-d9f6c2f1a5f5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 00:33:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.121\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gm5l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gm5l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gm5l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeD
evices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 00:33:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 00:33:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 00:33:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 00:33:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.121,StartTime:2020-05-05 
00:33:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-05 00:33:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://ab1736d48a82fa4222a2a267bbf6455f102021103fb30fcf4c0d99203f624116,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.121,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:34:04.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3695" for this suite. 
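The rollover test above creates a bare replica set of httpd pods, then a Deployment that gradually replaces them with agnhost pods while keeping the old pods serving. A manifest equivalent to the Deployment the log dumps might look like the sketch below; the names, image, selector, and strategy values are taken directly from the `DeploymentSpec` in the log, while the overall manifest shape is assumed.

```yaml
# Sketch of the Deployment the rollover test drives; values copied from the
# DeploymentSpec dump above (MinReadySeconds:10, MaxSurge:1, MaxUnavailable:0).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10        # a new pod must stay Ready 10s before counting as available
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # at most one extra pod during the rollover
      maxUnavailable: 0      # never drop below the desired replica count
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: agnhost
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
```

With `maxUnavailable: 0` and `minReadySeconds: 10`, the controller cannot scale the old replica sets down until the new agnhost pod has been Ready for a full 10 seconds, which is why the log repeats the "all replica sets need to contain the pod-template-hash label" poll for several cycles with `AvailableReplicas:1, UnavailableReplicas:1` before the old replica sets finally reach zero.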
• [SLOW TEST:23.216 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":288,"completed":168,"skipped":2809,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:34:04.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:35:04.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3868" for this suite. 
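The container-probe test that just passed runs a pod whose readiness probe always fails and asserts, over a 60-second window, that the pod is never reported Ready and the container is never restarted. The log does not show the pod manifest, so the container image and probe command below are purely illustrative; the key semantic it demonstrates is real: a failing readiness probe removes the pod from service endpoints but, unlike a liveness probe, never triggers a restart.

```yaml
# Hypothetical pod with an always-failing readiness probe; the actual image
# and command used by the e2e test are not visible in this log.
apiVersion: v1
kind: Pod
metadata:
  name: never-ready
spec:
  containers:
  - name: probe-test
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always exits non-zero, so Ready stays False
      initialDelaySeconds: 5
      periodSeconds: 5
```

Such a pod sits in `Running` phase with `Ready: false` and `RestartCount: 0` indefinitely, which is exactly the condition the test verifies.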
• [SLOW TEST:60.482 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":288,"completed":169,"skipped":2829,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:35:04.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 5 00:35:04.865: INFO: Waiting up to 5m0s for pod "pod-e7b22684-1a82-45dd-a1df-65dc84c1a8a3" in namespace "emptydir-7883" to be "Succeeded or Failed" May 5 00:35:04.871: INFO: Pod "pod-e7b22684-1a82-45dd-a1df-65dc84c1a8a3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.711223ms May 5 00:35:06.875: INFO: Pod "pod-e7b22684-1a82-45dd-a1df-65dc84c1a8a3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.009992304s May 5 00:35:08.880: INFO: Pod "pod-e7b22684-1a82-45dd-a1df-65dc84c1a8a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014866726s STEP: Saw pod success May 5 00:35:08.880: INFO: Pod "pod-e7b22684-1a82-45dd-a1df-65dc84c1a8a3" satisfied condition "Succeeded or Failed" May 5 00:35:08.883: INFO: Trying to get logs from node latest-worker pod pod-e7b22684-1a82-45dd-a1df-65dc84c1a8a3 container test-container: STEP: delete the pod May 5 00:35:08.925: INFO: Waiting for pod pod-e7b22684-1a82-45dd-a1df-65dc84c1a8a3 to disappear May 5 00:35:08.931: INFO: Pod pod-e7b22684-1a82-45dd-a1df-65dc84c1a8a3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:35:08.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7883" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":170,"skipped":2831,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:35:08.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 5 00:35:09.111: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2d00d0f1-435b-4d94-b985-5928f487645a" in namespace "projected-8535" to be "Succeeded or Failed" May 5 00:35:09.157: INFO: Pod "downwardapi-volume-2d00d0f1-435b-4d94-b985-5928f487645a": Phase="Pending", Reason="", readiness=false. Elapsed: 46.096705ms May 5 00:35:11.190: INFO: Pod "downwardapi-volume-2d00d0f1-435b-4d94-b985-5928f487645a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078432455s May 5 00:35:13.198: INFO: Pod "downwardapi-volume-2d00d0f1-435b-4d94-b985-5928f487645a": Phase="Running", Reason="", readiness=true. Elapsed: 4.086902088s May 5 00:35:15.202: INFO: Pod "downwardapi-volume-2d00d0f1-435b-4d94-b985-5928f487645a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.09091514s STEP: Saw pod success May 5 00:35:15.202: INFO: Pod "downwardapi-volume-2d00d0f1-435b-4d94-b985-5928f487645a" satisfied condition "Succeeded or Failed" May 5 00:35:15.205: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-2d00d0f1-435b-4d94-b985-5928f487645a container client-container: STEP: delete the pod May 5 00:35:15.360: INFO: Waiting for pod downwardapi-volume-2d00d0f1-435b-4d94-b985-5928f487645a to disappear May 5 00:35:15.382: INFO: Pod downwardapi-volume-2d00d0f1-435b-4d94-b985-5928f487645a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:35:15.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8535" for this suite. 
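The downward API test above mounts a projected volume and checks that the file it contains reports the container's memory limit. A minimal sketch of such a pod follows; the container name `client-container` matches the log, but the args, mount path, and the 64Mi limit are assumptions for illustration.

```yaml
# Sketch of a pod projecting its own memory limit into a file via the
# downward API; args, paths, and the limit value are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  containers:
  - name: client-container
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
    args: ["mounttest", "--file_content=/etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```

The `resourceFieldRef` source writes the resolved limit (in bytes, unless a `divisor` is set) into `/etc/podinfo/memory_limit`, and the test compares that file's contents against the limit declared in the pod spec.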
• [SLOW TEST:6.454 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":171,"skipped":2851,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:35:15.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 00:35:15.925: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 00:35:18.148: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235715, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235715, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235715, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235715, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 00:35:21.186: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:35:21.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2097" for this suite. STEP: Destroying namespace "webhook-2097-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.577 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":288,"completed":172,"skipped":2885,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:35:21.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall 
+answer +search dns-test-service.dns-963 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-963;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-963 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-963;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-963.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-963.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-963.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-963.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-963.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-963.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-963.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-963.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-963.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-963.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-963.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-963.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-963.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 5.10.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.10.5_udp@PTR;check="$$(dig +tcp +noall +answer +search 5.10.98.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.98.10.5_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-963 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-963;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-963 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-963;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-963.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-963.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-963.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-963.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-963.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-963.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-963.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-963.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-963.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-963.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-963.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-963.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-963.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 5.10.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.10.5_udp@PTR;check="$$(dig +tcp +noall +answer +search 5.10.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.10.5_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 5 00:35:28.331: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:28.334: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:28.337: INFO: Unable to read wheezy_udp@dns-test-service.dns-963 from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:28.340: INFO: Unable to read wheezy_tcp@dns-test-service.dns-963 from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:28.343: INFO: Unable to read wheezy_udp@dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 
00:35:28.346: INFO: Unable to read wheezy_tcp@dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:28.349: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:28.353: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:28.373: INFO: Unable to read jessie_udp@dns-test-service from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:28.376: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:28.380: INFO: Unable to read jessie_udp@dns-test-service.dns-963 from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:28.383: INFO: Unable to read jessie_tcp@dns-test-service.dns-963 from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:28.387: INFO: Unable to read jessie_udp@dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 
00:35:28.390: INFO: Unable to read jessie_tcp@dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:28.394: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:28.400: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:28.421: INFO: Lookups using dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-963 wheezy_tcp@dns-test-service.dns-963 wheezy_udp@dns-test-service.dns-963.svc wheezy_tcp@dns-test-service.dns-963.svc wheezy_udp@_http._tcp.dns-test-service.dns-963.svc wheezy_tcp@_http._tcp.dns-test-service.dns-963.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-963 jessie_tcp@dns-test-service.dns-963 jessie_udp@dns-test-service.dns-963.svc jessie_tcp@dns-test-service.dns-963.svc jessie_udp@_http._tcp.dns-test-service.dns-963.svc jessie_tcp@_http._tcp.dns-test-service.dns-963.svc] May 5 00:35:33.426: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:33.430: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:33.434: 
INFO: Unable to read wheezy_udp@dns-test-service.dns-963 from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:33.437: INFO: Unable to read wheezy_tcp@dns-test-service.dns-963 from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:33.440: INFO: Unable to read wheezy_udp@dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:33.443: INFO: Unable to read wheezy_tcp@dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:33.445: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:33.448: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:33.470: INFO: Unable to read jessie_udp@dns-test-service from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:33.473: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:33.476: 
INFO: Unable to read jessie_udp@dns-test-service.dns-963 from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:33.479: INFO: Unable to read jessie_tcp@dns-test-service.dns-963 from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:33.482: INFO: Unable to read jessie_udp@dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:33.486: INFO: Unable to read jessie_tcp@dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:33.488: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:33.492: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:33.537: INFO: Lookups using dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-963 wheezy_tcp@dns-test-service.dns-963 wheezy_udp@dns-test-service.dns-963.svc wheezy_tcp@dns-test-service.dns-963.svc wheezy_udp@_http._tcp.dns-test-service.dns-963.svc wheezy_tcp@_http._tcp.dns-test-service.dns-963.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service 
jessie_udp@dns-test-service.dns-963 jessie_tcp@dns-test-service.dns-963 jessie_udp@dns-test-service.dns-963.svc jessie_tcp@dns-test-service.dns-963.svc jessie_udp@_http._tcp.dns-test-service.dns-963.svc jessie_tcp@_http._tcp.dns-test-service.dns-963.svc] May 5 00:35:38.425: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:38.428: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:38.430: INFO: Unable to read wheezy_udp@dns-test-service.dns-963 from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:38.434: INFO: Unable to read wheezy_tcp@dns-test-service.dns-963 from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:38.436: INFO: Unable to read wheezy_udp@dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:38.439: INFO: Unable to read wheezy_tcp@dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:38.442: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 
00:35:38.445: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:38.464: INFO: Unable to read jessie_udp@dns-test-service from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:38.467: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:38.474: INFO: Unable to read jessie_udp@dns-test-service.dns-963 from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:38.476: INFO: Unable to read jessie_tcp@dns-test-service.dns-963 from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:38.479: INFO: Unable to read jessie_udp@dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:38.482: INFO: Unable to read jessie_tcp@dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:38.485: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 
00:35:38.488: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:38.504: INFO: Lookups using dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-963 wheezy_tcp@dns-test-service.dns-963 wheezy_udp@dns-test-service.dns-963.svc wheezy_tcp@dns-test-service.dns-963.svc wheezy_udp@_http._tcp.dns-test-service.dns-963.svc wheezy_tcp@_http._tcp.dns-test-service.dns-963.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-963 jessie_tcp@dns-test-service.dns-963 jessie_udp@dns-test-service.dns-963.svc jessie_tcp@dns-test-service.dns-963.svc jessie_udp@_http._tcp.dns-test-service.dns-963.svc jessie_tcp@_http._tcp.dns-test-service.dns-963.svc] May 5 00:35:43.426: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:43.429: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:43.432: INFO: Unable to read wheezy_udp@dns-test-service.dns-963 from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:43.435: INFO: Unable to read wheezy_tcp@dns-test-service.dns-963 from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:43.438: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:43.441: INFO: Unable to read wheezy_tcp@dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:43.444: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:43.447: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:43.472: INFO: Unable to read jessie_udp@dns-test-service from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:43.475: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:43.478: INFO: Unable to read jessie_udp@dns-test-service.dns-963 from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:43.481: INFO: Unable to read jessie_tcp@dns-test-service.dns-963 from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:43.483: INFO: Unable to read 
jessie_udp@dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:43.486: INFO: Unable to read jessie_tcp@dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:43.490: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:43.492: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:43.508: INFO: Lookups using dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-963 wheezy_tcp@dns-test-service.dns-963 wheezy_udp@dns-test-service.dns-963.svc wheezy_tcp@dns-test-service.dns-963.svc wheezy_udp@_http._tcp.dns-test-service.dns-963.svc wheezy_tcp@_http._tcp.dns-test-service.dns-963.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-963 jessie_tcp@dns-test-service.dns-963 jessie_udp@dns-test-service.dns-963.svc jessie_tcp@dns-test-service.dns-963.svc jessie_udp@_http._tcp.dns-test-service.dns-963.svc jessie_tcp@_http._tcp.dns-test-service.dns-963.svc] May 5 00:35:48.426: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:48.429: INFO: Unable to read 
wheezy_tcp@dns-test-service from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:48.433: INFO: Unable to read wheezy_udp@dns-test-service.dns-963 from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:48.436: INFO: Unable to read wheezy_tcp@dns-test-service.dns-963 from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:48.439: INFO: Unable to read wheezy_udp@dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:48.442: INFO: Unable to read wheezy_tcp@dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:48.446: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:48.449: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:48.506: INFO: Unable to read jessie_udp@dns-test-service from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:48.510: INFO: Unable to read 
jessie_tcp@dns-test-service from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:48.513: INFO: Unable to read jessie_udp@dns-test-service.dns-963 from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:48.516: INFO: Unable to read jessie_tcp@dns-test-service.dns-963 from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:48.519: INFO: Unable to read jessie_udp@dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:48.523: INFO: Unable to read jessie_tcp@dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:48.527: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:48.530: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:48.550: INFO: Lookups using dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-963 wheezy_tcp@dns-test-service.dns-963 wheezy_udp@dns-test-service.dns-963.svc 
wheezy_tcp@dns-test-service.dns-963.svc wheezy_udp@_http._tcp.dns-test-service.dns-963.svc wheezy_tcp@_http._tcp.dns-test-service.dns-963.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-963 jessie_tcp@dns-test-service.dns-963 jessie_udp@dns-test-service.dns-963.svc jessie_tcp@dns-test-service.dns-963.svc jessie_udp@_http._tcp.dns-test-service.dns-963.svc jessie_tcp@_http._tcp.dns-test-service.dns-963.svc] May 5 00:35:53.426: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:53.429: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:53.432: INFO: Unable to read wheezy_udp@dns-test-service.dns-963 from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:53.435: INFO: Unable to read wheezy_tcp@dns-test-service.dns-963 from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:53.439: INFO: Unable to read wheezy_udp@dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:53.442: INFO: Unable to read wheezy_tcp@dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:53.446: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:53.449: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:53.470: INFO: Unable to read jessie_udp@dns-test-service from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:53.473: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:53.476: INFO: Unable to read jessie_udp@dns-test-service.dns-963 from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:53.479: INFO: Unable to read jessie_tcp@dns-test-service.dns-963 from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:53.482: INFO: Unable to read jessie_udp@dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:53.485: INFO: Unable to read jessie_tcp@dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:53.488: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:53.491: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-963.svc from pod dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408: the server could not find the requested resource (get pods dns-test-f7687919-4132-4b66-b1b0-f6cba4083408) May 5 00:35:53.510: INFO: Lookups using dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-963 wheezy_tcp@dns-test-service.dns-963 wheezy_udp@dns-test-service.dns-963.svc wheezy_tcp@dns-test-service.dns-963.svc wheezy_udp@_http._tcp.dns-test-service.dns-963.svc wheezy_tcp@_http._tcp.dns-test-service.dns-963.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-963 jessie_tcp@dns-test-service.dns-963 jessie_udp@dns-test-service.dns-963.svc jessie_tcp@dns-test-service.dns-963.svc jessie_udp@_http._tcp.dns-test-service.dns-963.svc jessie_tcp@_http._tcp.dns-test-service.dns-963.svc] May 5 00:35:58.573: INFO: DNS probes using dns-963/dns-test-f7687919-4132-4b66-b1b0-f6cba4083408 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:35:59.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-963" for this suite. 
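The wheezy/jessie probes above cycle through a fixed set of DNS names per protocol: the service short name, the namespace-qualified name, the `.svc`-qualified name, and the `_http._tcp` SRV record. As an aside, a stdlib-only Go sketch (a hypothetical helper, not the e2e framework's actual code) of how that probe list can be generated:

```go
package main

import "fmt"

// probeNames builds the per-image DNS probe keys seen in the log:
// <image>_<proto>@<name> for the service short name, the
// namespace-qualified name, the .svc-qualified name, and the
// _http._tcp SRV record, for both udp and tcp.
func probeNames(image, service, namespace string) []string {
	bases := []string{
		service,
		fmt.Sprintf("%s.%s", service, namespace),
		fmt.Sprintf("%s.%s.svc", service, namespace),
		fmt.Sprintf("_http._tcp.%s.%s.svc", service, namespace),
	}
	var names []string
	for _, base := range bases {
		for _, proto := range []string{"udp", "tcp"} {
			names = append(names, fmt.Sprintf("%s_%s@%s", image, proto, base))
		}
	}
	return names
}

func main() {
	for _, n := range probeNames("wheezy", "dns-test-service", "dns-963") {
		fmt.Println(n)
	}
}
```

Running this with the values from the log yields the eight wheezy probe names that appear in the "Lookups using ... failed for" summary above.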
• [SLOW TEST:37.273 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":288,"completed":173,"skipped":2968,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:35:59.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 5 00:35:59.331: INFO: Waiting up to 5m0s for pod "downward-api-dab85ffa-89e7-40b1-acbe-78e5691b7bec" in namespace "downward-api-5786" to be "Succeeded or Failed" May 5 00:35:59.347: INFO: Pod "downward-api-dab85ffa-89e7-40b1-acbe-78e5691b7bec": Phase="Pending", Reason="", readiness=false. Elapsed: 16.174947ms May 5 00:36:01.511: INFO: Pod "downward-api-dab85ffa-89e7-40b1-acbe-78e5691b7bec": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.179747718s May 5 00:36:03.515: INFO: Pod "downward-api-dab85ffa-89e7-40b1-acbe-78e5691b7bec": Phase="Running", Reason="", readiness=true. Elapsed: 4.184575994s May 5 00:36:05.521: INFO: Pod "downward-api-dab85ffa-89e7-40b1-acbe-78e5691b7bec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.189707223s STEP: Saw pod success May 5 00:36:05.521: INFO: Pod "downward-api-dab85ffa-89e7-40b1-acbe-78e5691b7bec" satisfied condition "Succeeded or Failed" May 5 00:36:05.524: INFO: Trying to get logs from node latest-worker2 pod downward-api-dab85ffa-89e7-40b1-acbe-78e5691b7bec container dapi-container: STEP: delete the pod May 5 00:36:05.560: INFO: Waiting for pod downward-api-dab85ffa-89e7-40b1-acbe-78e5691b7bec to disappear May 5 00:36:05.594: INFO: Pod downward-api-dab85ffa-89e7-40b1-acbe-78e5691b7bec no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:36:05.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5786" for this suite. 
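The Downward API test above exposes `limits.cpu`/`requests.cpu` (and memory) to the container as env vars, where CPU quantities are scaled by a divisor (typically `1m`). A simplified stdlib-only sketch of that quantity-to-millicores conversion (hypothetical helper; the real implementation uses `resource.Quantity`, and only integer and "m"-suffixed forms are handled here):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// cpuMillis converts a Kubernetes-style CPU quantity ("250m", "2")
// into millicores, mirroring how a Downward API resourceFieldRef with
// divisor "1m" would surface requests.cpu to the container.
// Simplified sketch: fractional core forms like "0.5" are not handled.
func cpuMillis(q string) (int64, error) {
	if strings.HasSuffix(q, "m") {
		return strconv.ParseInt(strings.TrimSuffix(q, "m"), 10, 64)
	}
	cores, err := strconv.ParseInt(q, 10, 64)
	return cores * 1000, err
}

func main() {
	for _, q := range []string{"250m", "2"} {
		m, err := cpuMillis(q)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s => %dm\n", q, m)
	}
}
```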
• [SLOW TEST:6.360 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":288,"completed":174,"skipped":2981,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:36:05.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 00:36:06.089: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 00:36:08.100: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235766, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235766, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235766, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235766, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 00:36:10.104: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235766, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235766, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235766, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235766, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 00:36:13.181: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 5 00:36:13.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2437-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:36:14.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1844" for this suite. STEP: Destroying namespace "webhook-1844-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.849 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":288,"completed":175,"skipped":2990,"failed":0} SSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 
00:36:14.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 5 00:36:14.560: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-7bd3c639-0686-4bcd-ab58-bc07e43a8a28" in namespace "security-context-test-9921" to be "Succeeded or Failed" May 5 00:36:14.564: INFO: Pod "busybox-readonly-false-7bd3c639-0686-4bcd-ab58-bc07e43a8a28": Phase="Pending", Reason="", readiness=false. Elapsed: 3.93296ms May 5 00:36:16.568: INFO: Pod "busybox-readonly-false-7bd3c639-0686-4bcd-ab58-bc07e43a8a28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008191455s May 5 00:36:18.612: INFO: Pod "busybox-readonly-false-7bd3c639-0686-4bcd-ab58-bc07e43a8a28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0520919s May 5 00:36:18.612: INFO: Pod "busybox-readonly-false-7bd3c639-0686-4bcd-ab58-bc07e43a8a28" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:36:18.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9921" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":288,"completed":176,"skipped":2996,"failed":0} SSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:36:18.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 5 00:36:25.260: INFO: Successfully updated pod "adopt-release-9bsmf" STEP: Checking that the Job readopts the Pod May 5 00:36:25.260: INFO: Waiting up to 15m0s for pod "adopt-release-9bsmf" in namespace "job-1064" to be "adopted" May 5 00:36:25.284: INFO: Pod "adopt-release-9bsmf": Phase="Running", Reason="", readiness=true. Elapsed: 23.622963ms May 5 00:36:27.289: INFO: Pod "adopt-release-9bsmf": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.028657619s May 5 00:36:27.289: INFO: Pod "adopt-release-9bsmf" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 5 00:36:27.800: INFO: Successfully updated pod "adopt-release-9bsmf" STEP: Checking that the Job releases the Pod May 5 00:36:27.800: INFO: Waiting up to 15m0s for pod "adopt-release-9bsmf" in namespace "job-1064" to be "released" May 5 00:36:27.846: INFO: Pod "adopt-release-9bsmf": Phase="Running", Reason="", readiness=true. Elapsed: 46.039356ms May 5 00:36:29.850: INFO: Pod "adopt-release-9bsmf": Phase="Running", Reason="", readiness=true. Elapsed: 2.049939684s May 5 00:36:29.850: INFO: Pod "adopt-release-9bsmf" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:36:29.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1064" for this suite. • [SLOW TEST:11.238 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":288,"completed":177,"skipped":3001,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:36:29.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath May 5 00:36:34.123: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-8179 PodName:var-expansion-a3ddd197-45b1-4e5e-aa19-9f01d3c5dca6 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 00:36:34.123: INFO: >>> kubeConfig: /root/.kube/config I0505 00:36:34.158304 7 log.go:172] (0xc002a9cd10) (0xc0029c01e0) Create stream I0505 00:36:34.158332 7 log.go:172] (0xc002a9cd10) (0xc0029c01e0) Stream added, broadcasting: 1 I0505 00:36:34.160041 7 log.go:172] (0xc002a9cd10) Reply frame received for 1 I0505 00:36:34.160094 7 log.go:172] (0xc002a9cd10) (0xc002a4be00) Create stream I0505 00:36:34.160111 7 log.go:172] (0xc002a9cd10) (0xc002a4be00) Stream added, broadcasting: 3 I0505 00:36:34.160906 7 log.go:172] (0xc002a9cd10) Reply frame received for 3 I0505 00:36:34.160940 7 log.go:172] (0xc002a9cd10) (0xc002a4bea0) Create stream I0505 00:36:34.160958 7 log.go:172] (0xc002a9cd10) (0xc002a4bea0) Stream added, broadcasting: 5 I0505 00:36:34.162164 7 log.go:172] (0xc002a9cd10) Reply frame received for 5 I0505 00:36:34.241058 7 log.go:172] (0xc002a9cd10) Data frame received for 3 I0505 00:36:34.241097 7 log.go:172] (0xc002a4be00) (3) Data frame handling I0505 00:36:34.241221 7 log.go:172] (0xc002a9cd10) Data frame received for 5 I0505 00:36:34.241234 7 log.go:172] (0xc002a4bea0) (5) Data frame handling I0505 00:36:34.242776 7 log.go:172] (0xc002a9cd10) Data frame received for 1 I0505 00:36:34.242809 7 log.go:172] (0xc0029c01e0) (1) Data frame handling I0505 00:36:34.242823 7 log.go:172] (0xc0029c01e0) (1) Data frame 
sent I0505 00:36:34.242838 7 log.go:172] (0xc002a9cd10) (0xc0029c01e0) Stream removed, broadcasting: 1 I0505 00:36:34.242860 7 log.go:172] (0xc002a9cd10) Go away received I0505 00:36:34.243007 7 log.go:172] (0xc002a9cd10) (0xc0029c01e0) Stream removed, broadcasting: 1 I0505 00:36:34.243027 7 log.go:172] (0xc002a9cd10) (0xc002a4be00) Stream removed, broadcasting: 3 I0505 00:36:34.243041 7 log.go:172] (0xc002a9cd10) (0xc002a4bea0) Stream removed, broadcasting: 5 STEP: test for file in mounted path May 5 00:36:34.259: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-8179 PodName:var-expansion-a3ddd197-45b1-4e5e-aa19-9f01d3c5dca6 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 00:36:34.259: INFO: >>> kubeConfig: /root/.kube/config I0505 00:36:34.296097 7 log.go:172] (0xc002e16dc0) (0xc00106c460) Create stream I0505 00:36:34.296124 7 log.go:172] (0xc002e16dc0) (0xc00106c460) Stream added, broadcasting: 1 I0505 00:36:34.300393 7 log.go:172] (0xc002e16dc0) Reply frame received for 1 I0505 00:36:34.300435 7 log.go:172] (0xc002e16dc0) (0xc00106c500) Create stream I0505 00:36:34.300446 7 log.go:172] (0xc002e16dc0) (0xc00106c500) Stream added, broadcasting: 3 I0505 00:36:34.304833 7 log.go:172] (0xc002e16dc0) Reply frame received for 3 I0505 00:36:34.304892 7 log.go:172] (0xc002e16dc0) (0xc000d8c0a0) Create stream I0505 00:36:34.304918 7 log.go:172] (0xc002e16dc0) (0xc000d8c0a0) Stream added, broadcasting: 5 I0505 00:36:34.306974 7 log.go:172] (0xc002e16dc0) Reply frame received for 5 I0505 00:36:34.372522 7 log.go:172] (0xc002e16dc0) Data frame received for 3 I0505 00:36:34.372561 7 log.go:172] (0xc00106c500) (3) Data frame handling I0505 00:36:34.372589 7 log.go:172] (0xc002e16dc0) Data frame received for 5 I0505 00:36:34.372609 7 log.go:172] (0xc000d8c0a0) (5) Data frame handling I0505 00:36:34.374161 7 log.go:172] (0xc002e16dc0) Data frame received for 1 I0505 
00:36:34.374184 7 log.go:172] (0xc00106c460) (1) Data frame handling I0505 00:36:34.374208 7 log.go:172] (0xc00106c460) (1) Data frame sent I0505 00:36:34.374308 7 log.go:172] (0xc002e16dc0) (0xc00106c460) Stream removed, broadcasting: 1 I0505 00:36:34.374358 7 log.go:172] (0xc002e16dc0) Go away received I0505 00:36:34.374469 7 log.go:172] (0xc002e16dc0) (0xc00106c460) Stream removed, broadcasting: 1 I0505 00:36:34.374490 7 log.go:172] (0xc002e16dc0) (0xc00106c500) Stream removed, broadcasting: 3 I0505 00:36:34.374501 7 log.go:172] (0xc002e16dc0) (0xc000d8c0a0) Stream removed, broadcasting: 5 STEP: updating the annotation value May 5 00:36:34.881: INFO: Successfully updated pod "var-expansion-a3ddd197-45b1-4e5e-aa19-9f01d3c5dca6" STEP: waiting for annotated pod running STEP: deleting the pod gracefully May 5 00:36:34.942: INFO: Deleting pod "var-expansion-a3ddd197-45b1-4e5e-aa19-9f01d3c5dca6" in namespace "var-expansion-8179" May 5 00:36:34.947: INFO: Wait up to 5m0s for pod "var-expansion-a3ddd197-45b1-4e5e-aa19-9f01d3c5dca6" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:37:16.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8179" for this suite. 
• [SLOW TEST:47.112 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":288,"completed":178,"skipped":3014,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:37:16.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 5 00:37:17.052: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f3e439be-8d15-4f30-9eeb-2148672fd7e5" in namespace "projected-2156" to be "Succeeded or Failed" May 5 00:37:17.063: INFO: Pod "downwardapi-volume-f3e439be-8d15-4f30-9eeb-2148672fd7e5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.630319ms May 5 00:37:19.089: INFO: Pod "downwardapi-volume-f3e439be-8d15-4f30-9eeb-2148672fd7e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036604832s May 5 00:37:21.094: INFO: Pod "downwardapi-volume-f3e439be-8d15-4f30-9eeb-2148672fd7e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041393684s STEP: Saw pod success May 5 00:37:21.094: INFO: Pod "downwardapi-volume-f3e439be-8d15-4f30-9eeb-2148672fd7e5" satisfied condition "Succeeded or Failed" May 5 00:37:21.098: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-f3e439be-8d15-4f30-9eeb-2148672fd7e5 container client-container: STEP: delete the pod May 5 00:37:21.170: INFO: Waiting for pod downwardapi-volume-f3e439be-8d15-4f30-9eeb-2148672fd7e5 to disappear May 5 00:37:21.195: INFO: Pod downwardapi-volume-f3e439be-8d15-4f30-9eeb-2148672fd7e5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:37:21.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2156" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":179,"skipped":3018,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:37:21.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod May 5 00:37:25.428: INFO: Pod pod-hostip-862bf1ef-3e55-47d6-ac6d-589443d9b9c4 has hostIP: 172.17.0.13 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:37:25.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5459" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":288,"completed":180,"skipped":3041,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:37:25.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 5 00:37:25.490: INFO: Waiting up to 5m0s for pod "pod-88c6819d-354f-4145-85c5-0a2f69341bf8" in namespace "emptydir-2982" to be "Succeeded or Failed" May 5 00:37:25.502: INFO: Pod "pod-88c6819d-354f-4145-85c5-0a2f69341bf8": Phase="Pending", Reason="", readiness=false. Elapsed: 11.714021ms May 5 00:37:27.504: INFO: Pod "pod-88c6819d-354f-4145-85c5-0a2f69341bf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013857363s May 5 00:37:29.553: INFO: Pod "pod-88c6819d-354f-4145-85c5-0a2f69341bf8": Phase="Running", Reason="", readiness=true. Elapsed: 4.062559302s May 5 00:37:31.558: INFO: Pod "pod-88c6819d-354f-4145-85c5-0a2f69341bf8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.067195169s STEP: Saw pod success May 5 00:37:31.558: INFO: Pod "pod-88c6819d-354f-4145-85c5-0a2f69341bf8" satisfied condition "Succeeded or Failed" May 5 00:37:31.560: INFO: Trying to get logs from node latest-worker2 pod pod-88c6819d-354f-4145-85c5-0a2f69341bf8 container test-container: STEP: delete the pod May 5 00:37:31.633: INFO: Waiting for pod pod-88c6819d-354f-4145-85c5-0a2f69341bf8 to disappear May 5 00:37:31.638: INFO: Pod pod-88c6819d-354f-4145-85c5-0a2f69341bf8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:37:31.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2982" for this suite. • [SLOW TEST:6.209 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":181,"skipped":3044,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:37:31.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable 
from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-7215b76b-611f-438e-afc7-2cac330d61a0 STEP: Creating a pod to test consume secrets May 5 00:37:31.756: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c2107b66-096c-4de5-b204-85a617b91482" in namespace "projected-8464" to be "Succeeded or Failed" May 5 00:37:31.764: INFO: Pod "pod-projected-secrets-c2107b66-096c-4de5-b204-85a617b91482": Phase="Pending", Reason="", readiness=false. Elapsed: 7.614616ms May 5 00:37:33.817: INFO: Pod "pod-projected-secrets-c2107b66-096c-4de5-b204-85a617b91482": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060884218s May 5 00:37:35.821: INFO: Pod "pod-projected-secrets-c2107b66-096c-4de5-b204-85a617b91482": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064826834s STEP: Saw pod success May 5 00:37:35.821: INFO: Pod "pod-projected-secrets-c2107b66-096c-4de5-b204-85a617b91482" satisfied condition "Succeeded or Failed" May 5 00:37:35.824: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-c2107b66-096c-4de5-b204-85a617b91482 container projected-secret-volume-test: STEP: delete the pod May 5 00:37:35.957: INFO: Waiting for pod pod-projected-secrets-c2107b66-096c-4de5-b204-85a617b91482 to disappear May 5 00:37:35.962: INFO: Pod pod-projected-secrets-c2107b66-096c-4de5-b204-85a617b91482 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:37:35.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8464" for this suite. 
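The "Waiting up to 5m0s for pod … to be 'Succeeded or Failed'" lines above, with their roughly 2-second `Elapsed` increments, come from the framework's pod-phase poll loop. A minimal Python sketch of that pattern (a hypothetical helper for illustration only; the actual e2e framework is Go code in `test/e2e/framework`):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() every `interval` seconds until the pod reaches a
    terminal phase ('Succeeded' or 'Failed') or `timeout` elapses."""
    deadline = clock() + timeout
    while clock() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod never reached 'Succeeded or Failed'")

# Simulated phase sequence matching the log: Pending, Pending, Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_terminal_phase(lambda: next(phases), interval=0.01))
```

Note that `Succeeded` ends the wait even though the pod is no longer ready (`readiness=false` in the log): readiness and phase are tracked separately.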
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":182,"skipped":3055,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:37:35.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:37:36.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8073" for this suite. 
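The "Pods Set QOS Class" spec above verifies that a pod whose containers declare identical memory and CPU requests and limits is assigned the `Guaranteed` QoS class. A rough Python approximation of the classification rules (simplified for illustration; the authoritative logic is the Go QoS helper in the Kubernetes source, which also handles extended resources and defaulting):

```python
def qos_class(containers):
    """Rough approximation of Kubernetes pod QoS classification.

    containers: list of dicts with optional 'requests' and 'limits'
    sub-dicts keyed by resource name ('cpu', 'memory').
    """
    requests = [c.get("requests") or {} for c in containers]
    limits = [c.get("limits") or {} for c in containers]
    if not any(requests) and not any(limits):
        return "BestEffort"          # no resources declared anywhere
    for req, lim in zip(requests, limits):
        if set(lim) < {"cpu", "memory"}:
            return "Burstable"       # some container lacks cpu+memory limits
        if req and req != lim:
            return "Burstable"       # requests set but not equal to limits
    return "Guaranteed"

# Matching requests and limits for memory and cpu, as in the spec above.
pod = [{"requests": {"cpu": "100m", "memory": "100Mi"},
        "limits":   {"cpu": "100m", "memory": "100Mi"}}]
print(qos_class(pod))
```

The spec's assertion is essentially that this classification, computed server-side, appears in the pod's `status.qosClass` field.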
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":288,"completed":183,"skipped":3067,"failed":0} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:37:36.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 5 00:37:36.318: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 5 00:37:36.329: INFO: Waiting for terminating namespaces to be deleted... 
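The `•{"msg":…}` markers threaded through the output are machine-readable progress records ("total", "completed", "skipped", "failed") emitted after each spec. A small Python sketch of post-processing them (hypothetical tooling, not part of the suite itself):

```python
import json

def tally(lines):
    """Return the counters from the last JSON progress record seen."""
    done = {"completed": 0, "skipped": 0, "failed": 0, "total": 0}
    for line in lines:
        line = line.lstrip("\u2022 ")   # strip the leading bullet, if any
        if line.startswith("{"):
            rec = json.loads(line)
            done.update({k: rec[k] for k in done if k in rec})
    return done

records = [
    '\u2022{"msg":"PASSED","total":288,"completed":184,"skipped":3069,"failed":0}',
]
print(tally(records))
```

Because each record carries cumulative counters, only the last one matters for a summary; earlier records are useful for locating where a run stalled.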
May 5 00:37:36.332: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 5 00:37:36.336: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 5 00:37:36.337: INFO: Container kindnet-cni ready: true, restart count 0 May 5 00:37:36.337: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 5 00:37:36.337: INFO: Container kube-proxy ready: true, restart count 0 May 5 00:37:36.337: INFO: pod-qos-class-b7305ab4-784d-402d-91e3-89f5c0a8dd50 from pods-8073 started at 2020-05-05 00:37:36 +0000 UTC (1 container statuses recorded) May 5 00:37:36.337: INFO: Container agnhost ready: false, restart count 0 May 5 00:37:36.337: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 5 00:37:36.340: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 5 00:37:36.340: INFO: Container kindnet-cni ready: true, restart count 0 May 5 00:37:36.340: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 5 00:37:36.340: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-08fa6941-86e0-4731-a57c-f41414618b67 42 STEP: Trying to relaunch the pod, now with labels. 
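The NodeSelector spec above first launches an unlabeled pod to discover a schedulable node, stamps that node with a random label (here with value `42`), then relaunches the pod with a matching `nodeSelector`. The matching rule being exercised is plain key/value equality; a hedged Python sketch of it (illustrative only, not the scheduler's Go implementation):

```python
def node_selector_matches(node_labels, node_selector):
    """A pod's nodeSelector matches a node iff every selector key is
    present on the node with exactly the same value."""
    return all(node_labels.get(key) == value
               for key, value in node_selector.items())

node = {"kubernetes.io/hostname": "latest-worker2",
        "kubernetes.io/e2e-08fa6941-86e0-4731-a57c-f41414618b67": "42"}
selector = {"kubernetes.io/e2e-08fa6941-86e0-4731-a57c-f41414618b67": "42"}
print(node_selector_matches(node, selector))
```

The spec's teardown then removes the label and verifies it is gone, which is why the log ends with the "removing the label … off the node" steps.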
STEP: removing the label kubernetes.io/e2e-08fa6941-86e0-4731-a57c-f41414618b67 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-08fa6941-86e0-4731-a57c-f41414618b67 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:37:46.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5332" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:10.319 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":288,"completed":184,"skipped":3069,"failed":0} SSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:37:46.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 5 00:37:46.645: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:37:54.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2825" for this suite. • [SLOW TEST:7.952 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":288,"completed":185,"skipped":3077,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:37:54.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:37:54.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-205" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":288,"completed":186,"skipped":3140,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:37:54.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 5 00:37:54.749: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:38:05.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-313" for this suite. • [SLOW TEST:10.586 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":288,"completed":187,"skipped":3145,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:38:05.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 00:38:06.060: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 00:38:08.092: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235886, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235886, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235886, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235886, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 00:38:10.121: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235886, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235886, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235886, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724235886, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 00:38:13.186: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate 
configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:38:13.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-773" for this suite. STEP: Destroying namespace "webhook-773-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.133 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":288,"completed":188,"skipped":3173,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:38:13.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account 
to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9971 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-9971 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9971 May 5 00:38:13.503: INFO: Found 0 stateful pods, waiting for 1 May 5 00:38:23.508: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 5 00:38:23.511: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9971 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 5 00:38:23.796: INFO: stderr: "I0505 00:38:23.650276 2304 log.go:172] (0xc00067a420) (0xc0005585a0) Create stream\nI0505 00:38:23.650344 2304 log.go:172] (0xc00067a420) (0xc0005585a0) Stream added, broadcasting: 1\nI0505 00:38:23.652227 2304 log.go:172] (0xc00067a420) Reply frame received for 1\nI0505 00:38:23.652276 2304 log.go:172] (0xc00067a420) (0xc00047a1e0) Create stream\nI0505 00:38:23.652296 2304 log.go:172] (0xc00067a420) (0xc00047a1e0) Stream added, broadcasting: 3\nI0505 00:38:23.653372 2304 log.go:172] (0xc00067a420) Reply frame received for 3\nI0505 00:38:23.653423 2304 log.go:172] (0xc00067a420) (0xc000424dc0) Create stream\nI0505 00:38:23.653449 2304 log.go:172] 
(0xc00067a420) (0xc000424dc0) Stream added, broadcasting: 5\nI0505 00:38:23.654466 2304 log.go:172] (0xc00067a420) Reply frame received for 5\nI0505 00:38:23.756772 2304 log.go:172] (0xc00067a420) Data frame received for 5\nI0505 00:38:23.756804 2304 log.go:172] (0xc000424dc0) (5) Data frame handling\nI0505 00:38:23.756822 2304 log.go:172] (0xc000424dc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0505 00:38:23.789440 2304 log.go:172] (0xc00067a420) Data frame received for 5\nI0505 00:38:23.789474 2304 log.go:172] (0xc000424dc0) (5) Data frame handling\nI0505 00:38:23.789510 2304 log.go:172] (0xc00067a420) Data frame received for 3\nI0505 00:38:23.789522 2304 log.go:172] (0xc00047a1e0) (3) Data frame handling\nI0505 00:38:23.789536 2304 log.go:172] (0xc00047a1e0) (3) Data frame sent\nI0505 00:38:23.789639 2304 log.go:172] (0xc00067a420) Data frame received for 3\nI0505 00:38:23.789661 2304 log.go:172] (0xc00047a1e0) (3) Data frame handling\nI0505 00:38:23.791301 2304 log.go:172] (0xc00067a420) Data frame received for 1\nI0505 00:38:23.791343 2304 log.go:172] (0xc0005585a0) (1) Data frame handling\nI0505 00:38:23.791394 2304 log.go:172] (0xc0005585a0) (1) Data frame sent\nI0505 00:38:23.791426 2304 log.go:172] (0xc00067a420) (0xc0005585a0) Stream removed, broadcasting: 1\nI0505 00:38:23.791468 2304 log.go:172] (0xc00067a420) Go away received\nI0505 00:38:23.791809 2304 log.go:172] (0xc00067a420) (0xc0005585a0) Stream removed, broadcasting: 1\nI0505 00:38:23.791832 2304 log.go:172] (0xc00067a420) (0xc00047a1e0) Stream removed, broadcasting: 3\nI0505 00:38:23.791843 2304 log.go:172] (0xc00067a420) (0xc000424dc0) Stream removed, broadcasting: 5\n" May 5 00:38:23.796: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 5 00:38:23.796: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 5 00:38:23.801: INFO: Waiting for 
pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 5 00:38:33.806: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 5 00:38:33.806: INFO: Waiting for statefulset status.replicas updated to 0 May 5 00:38:33.839: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999705s May 5 00:38:34.843: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.978912581s May 5 00:38:35.849: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.974213071s May 5 00:38:36.854: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.968279534s May 5 00:38:37.859: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.963205316s May 5 00:38:38.864: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.958550758s May 5 00:38:39.869: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.953550526s May 5 00:38:40.895: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.948274094s May 5 00:38:41.901: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.922072429s May 5 00:38:42.906: INFO: Verifying statefulset ss doesn't scale past 1 for another 916.633629ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9971 May 5 00:38:43.911: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9971 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 00:38:44.138: INFO: stderr: "I0505 00:38:44.052937 2325 log.go:172] (0xc0008eae70) (0xc000a4a8c0) Create stream\nI0505 00:38:44.052992 2325 log.go:172] (0xc0008eae70) (0xc000a4a8c0) Stream added, broadcasting: 1\nI0505 00:38:44.058428 2325 log.go:172] (0xc0008eae70) Reply frame received for 1\nI0505 00:38:44.058500 2325 log.go:172] (0xc0008eae70) (0xc0005d23c0) Create stream\nI0505 
00:38:44.058528 2325 log.go:172] (0xc0008eae70) (0xc0005d23c0) Stream added, broadcasting: 3\nI0505 00:38:44.059405 2325 log.go:172] (0xc0008eae70) Reply frame received for 3\nI0505 00:38:44.059469 2325 log.go:172] (0xc0008eae70) (0xc00053cf00) Create stream\nI0505 00:38:44.059507 2325 log.go:172] (0xc0008eae70) (0xc00053cf00) Stream added, broadcasting: 5\nI0505 00:38:44.060286 2325 log.go:172] (0xc0008eae70) Reply frame received for 5\nI0505 00:38:44.131620 2325 log.go:172] (0xc0008eae70) Data frame received for 5\nI0505 00:38:44.131645 2325 log.go:172] (0xc00053cf00) (5) Data frame handling\nI0505 00:38:44.131655 2325 log.go:172] (0xc00053cf00) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0505 00:38:44.131663 2325 log.go:172] (0xc0008eae70) Data frame received for 5\nI0505 00:38:44.131689 2325 log.go:172] (0xc00053cf00) (5) Data frame handling\nI0505 00:38:44.131718 2325 log.go:172] (0xc0008eae70) Data frame received for 3\nI0505 00:38:44.131752 2325 log.go:172] (0xc0005d23c0) (3) Data frame handling\nI0505 00:38:44.131777 2325 log.go:172] (0xc0005d23c0) (3) Data frame sent\nI0505 00:38:44.131794 2325 log.go:172] (0xc0008eae70) Data frame received for 3\nI0505 00:38:44.131808 2325 log.go:172] (0xc0005d23c0) (3) Data frame handling\nI0505 00:38:44.132889 2325 log.go:172] (0xc0008eae70) Data frame received for 1\nI0505 00:38:44.132912 2325 log.go:172] (0xc000a4a8c0) (1) Data frame handling\nI0505 00:38:44.132931 2325 log.go:172] (0xc000a4a8c0) (1) Data frame sent\nI0505 00:38:44.132947 2325 log.go:172] (0xc0008eae70) (0xc000a4a8c0) Stream removed, broadcasting: 1\nI0505 00:38:44.132980 2325 log.go:172] (0xc0008eae70) Go away received\nI0505 00:38:44.133408 2325 log.go:172] (0xc0008eae70) (0xc000a4a8c0) Stream removed, broadcasting: 1\nI0505 00:38:44.133428 2325 log.go:172] (0xc0008eae70) (0xc0005d23c0) Stream removed, broadcasting: 3\nI0505 00:38:44.133441 2325 log.go:172] (0xc0008eae70) (0xc00053cf00) Stream removed, broadcasting: 
5\n" May 5 00:38:44.138: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 5 00:38:44.138: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 5 00:38:44.143: INFO: Found 1 stateful pods, waiting for 3 May 5 00:38:54.149: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 5 00:38:54.149: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 5 00:38:54.149: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 5 00:38:54.160: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9971 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 5 00:38:54.388: INFO: stderr: "I0505 00:38:54.293784 2348 log.go:172] (0xc000748bb0) (0xc0004e2280) Create stream\nI0505 00:38:54.293835 2348 log.go:172] (0xc000748bb0) (0xc0004e2280) Stream added, broadcasting: 1\nI0505 00:38:54.296025 2348 log.go:172] (0xc000748bb0) Reply frame received for 1\nI0505 00:38:54.296073 2348 log.go:172] (0xc000748bb0) (0xc000436dc0) Create stream\nI0505 00:38:54.296084 2348 log.go:172] (0xc000748bb0) (0xc000436dc0) Stream added, broadcasting: 3\nI0505 00:38:54.297039 2348 log.go:172] (0xc000748bb0) Reply frame received for 3\nI0505 00:38:54.297077 2348 log.go:172] (0xc000748bb0) (0xc00013b860) Create stream\nI0505 00:38:54.297087 2348 log.go:172] (0xc000748bb0) (0xc00013b860) Stream added, broadcasting: 5\nI0505 00:38:54.298121 2348 log.go:172] (0xc000748bb0) Reply frame received for 5\nI0505 00:38:54.380414 2348 log.go:172] (0xc000748bb0) Data frame received for 5\nI0505 00:38:54.380458 2348 log.go:172] (0xc00013b860) (5) Data frame 
handling\nI0505 00:38:54.380497 2348 log.go:172] (0xc00013b860) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0505 00:38:54.380848 2348 log.go:172] (0xc000748bb0) Data frame received for 3\nI0505 00:38:54.380884 2348 log.go:172] (0xc000436dc0) (3) Data frame handling\nI0505 00:38:54.380910 2348 log.go:172] (0xc000748bb0) Data frame received for 5\nI0505 00:38:54.380936 2348 log.go:172] (0xc00013b860) (5) Data frame handling\nI0505 00:38:54.380973 2348 log.go:172] (0xc000436dc0) (3) Data frame sent\nI0505 00:38:54.381089 2348 log.go:172] (0xc000748bb0) Data frame received for 3\nI0505 00:38:54.381106 2348 log.go:172] (0xc000436dc0) (3) Data frame handling\nI0505 00:38:54.383143 2348 log.go:172] (0xc000748bb0) Data frame received for 1\nI0505 00:38:54.383173 2348 log.go:172] (0xc0004e2280) (1) Data frame handling\nI0505 00:38:54.383189 2348 log.go:172] (0xc0004e2280) (1) Data frame sent\nI0505 00:38:54.383217 2348 log.go:172] (0xc000748bb0) (0xc0004e2280) Stream removed, broadcasting: 1\nI0505 00:38:54.383266 2348 log.go:172] (0xc000748bb0) Go away received\nI0505 00:38:54.383602 2348 log.go:172] (0xc000748bb0) (0xc0004e2280) Stream removed, broadcasting: 1\nI0505 00:38:54.383626 2348 log.go:172] (0xc000748bb0) (0xc000436dc0) Stream removed, broadcasting: 3\nI0505 00:38:54.383640 2348 log.go:172] (0xc000748bb0) (0xc00013b860) Stream removed, broadcasting: 5\n" May 5 00:38:54.388: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 5 00:38:54.388: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 5 00:38:54.388: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9971 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 5 00:38:54.678: INFO: stderr: "I0505 00:38:54.566012 2369 log.go:172] (0xc000a66e70) 
(0xc0006b6aa0) Create stream\nI0505 00:38:54.566082 2369 log.go:172] (0xc000a66e70) (0xc0006b6aa0) Stream added, broadcasting: 1\nI0505 00:38:54.569018 2369 log.go:172] (0xc000a66e70) Reply frame received for 1\nI0505 00:38:54.569076 2369 log.go:172] (0xc000a66e70) (0xc00056f9a0) Create stream\nI0505 00:38:54.569749 2369 log.go:172] (0xc000a66e70) (0xc00056f9a0) Stream added, broadcasting: 3\nI0505 00:38:54.571104 2369 log.go:172] (0xc000a66e70) Reply frame received for 3\nI0505 00:38:54.571151 2369 log.go:172] (0xc000a66e70) (0xc0006b6fa0) Create stream\nI0505 00:38:54.571161 2369 log.go:172] (0xc000a66e70) (0xc0006b6fa0) Stream added, broadcasting: 5\nI0505 00:38:54.572461 2369 log.go:172] (0xc000a66e70) Reply frame received for 5\nI0505 00:38:54.634293 2369 log.go:172] (0xc000a66e70) Data frame received for 5\nI0505 00:38:54.634337 2369 log.go:172] (0xc0006b6fa0) (5) Data frame handling\nI0505 00:38:54.634367 2369 log.go:172] (0xc0006b6fa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0505 00:38:54.668590 2369 log.go:172] (0xc000a66e70) Data frame received for 3\nI0505 00:38:54.668615 2369 log.go:172] (0xc00056f9a0) (3) Data frame handling\nI0505 00:38:54.668628 2369 log.go:172] (0xc00056f9a0) (3) Data frame sent\nI0505 00:38:54.668928 2369 log.go:172] (0xc000a66e70) Data frame received for 5\nI0505 00:38:54.668940 2369 log.go:172] (0xc0006b6fa0) (5) Data frame handling\nI0505 00:38:54.669786 2369 log.go:172] (0xc000a66e70) Data frame received for 3\nI0505 00:38:54.669798 2369 log.go:172] (0xc00056f9a0) (3) Data frame handling\nI0505 00:38:54.671723 2369 log.go:172] (0xc000a66e70) Data frame received for 1\nI0505 00:38:54.671765 2369 log.go:172] (0xc0006b6aa0) (1) Data frame handling\nI0505 00:38:54.671800 2369 log.go:172] (0xc0006b6aa0) (1) Data frame sent\nI0505 00:38:54.671838 2369 log.go:172] (0xc000a66e70) (0xc0006b6aa0) Stream removed, broadcasting: 1\nI0505 00:38:54.671886 2369 log.go:172] (0xc000a66e70) Go away 
received\nI0505 00:38:54.672327 2369 log.go:172] (0xc000a66e70) (0xc0006b6aa0) Stream removed, broadcasting: 1\nI0505 00:38:54.672354 2369 log.go:172] (0xc000a66e70) (0xc00056f9a0) Stream removed, broadcasting: 3\nI0505 00:38:54.672368 2369 log.go:172] (0xc000a66e70) (0xc0006b6fa0) Stream removed, broadcasting: 5\n" May 5 00:38:54.678: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 5 00:38:54.678: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 5 00:38:54.678: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9971 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 5 00:38:54.947: INFO: stderr: "I0505 00:38:54.816814 2389 log.go:172] (0xc000b31130) (0xc00084cfa0) Create stream\nI0505 00:38:54.816868 2389 log.go:172] (0xc000b31130) (0xc00084cfa0) Stream added, broadcasting: 1\nI0505 00:38:54.819965 2389 log.go:172] (0xc000b31130) Reply frame received for 1\nI0505 00:38:54.820001 2389 log.go:172] (0xc000b31130) (0xc00070e820) Create stream\nI0505 00:38:54.820014 2389 log.go:172] (0xc000b31130) (0xc00070e820) Stream added, broadcasting: 3\nI0505 00:38:54.820920 2389 log.go:172] (0xc000b31130) Reply frame received for 3\nI0505 00:38:54.820960 2389 log.go:172] (0xc000b31130) (0xc00084d540) Create stream\nI0505 00:38:54.820975 2389 log.go:172] (0xc000b31130) (0xc00084d540) Stream added, broadcasting: 5\nI0505 00:38:54.822179 2389 log.go:172] (0xc000b31130) Reply frame received for 5\nI0505 00:38:54.900687 2389 log.go:172] (0xc000b31130) Data frame received for 5\nI0505 00:38:54.900707 2389 log.go:172] (0xc00084d540) (5) Data frame handling\nI0505 00:38:54.900721 2389 log.go:172] (0xc00084d540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0505 00:38:54.942512 2389 log.go:172] (0xc000b31130) Data 
frame received for 5\nI0505 00:38:54.942654 2389 log.go:172] (0xc00084d540) (5) Data frame handling\nI0505 00:38:54.942705 2389 log.go:172] (0xc000b31130) Data frame received for 3\nI0505 00:38:54.942741 2389 log.go:172] (0xc00070e820) (3) Data frame handling\nI0505 00:38:54.942764 2389 log.go:172] (0xc00070e820) (3) Data frame sent\nI0505 00:38:54.942781 2389 log.go:172] (0xc000b31130) Data frame received for 3\nI0505 00:38:54.942793 2389 log.go:172] (0xc00070e820) (3) Data frame handling\nI0505 00:38:54.944144 2389 log.go:172] (0xc000b31130) Data frame received for 1\nI0505 00:38:54.944173 2389 log.go:172] (0xc00084cfa0) (1) Data frame handling\nI0505 00:38:54.944198 2389 log.go:172] (0xc00084cfa0) (1) Data frame sent\nI0505 00:38:54.944227 2389 log.go:172] (0xc000b31130) (0xc00084cfa0) Stream removed, broadcasting: 1\nI0505 00:38:54.944254 2389 log.go:172] (0xc000b31130) Go away received\nI0505 00:38:54.944464 2389 log.go:172] (0xc000b31130) (0xc00084cfa0) Stream removed, broadcasting: 1\nI0505 00:38:54.944478 2389 log.go:172] (0xc000b31130) (0xc00070e820) Stream removed, broadcasting: 3\nI0505 00:38:54.944484 2389 log.go:172] (0xc000b31130) (0xc00084d540) Stream removed, broadcasting: 5\n" May 5 00:38:54.947: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 5 00:38:54.947: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 5 00:38:54.947: INFO: Waiting for statefulset status.replicas updated to 0 May 5 00:38:55.005: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 5 00:39:05.014: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 5 00:39:05.014: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 5 00:39:05.014: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 5 
00:39:05.057: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999556s May 5 00:39:06.062: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.963755673s May 5 00:39:07.067: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.958738222s May 5 00:39:08.082: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.953457518s May 5 00:39:09.087: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.939210996s May 5 00:39:10.092: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.933686163s May 5 00:39:11.097: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.929150353s May 5 00:39:12.101: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.924348786s May 5 00:39:13.106: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.919510228s May 5 00:39:14.116: INFO: Verifying statefulset ss doesn't scale past 3 for another 914.666091ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-9971 May 5 00:39:15.122: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9971 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 00:39:15.314: INFO: stderr: "I0505 00:39:15.244903 2409 log.go:172] (0xc000647080) (0xc000b605a0) Create stream\nI0505 00:39:15.244952 2409 log.go:172] (0xc000647080) (0xc000b605a0) Stream added, broadcasting: 1\nI0505 00:39:15.247259 2409 log.go:172] (0xc000647080) Reply frame received for 1\nI0505 00:39:15.247296 2409 log.go:172] (0xc000647080) (0xc000c6a0a0) Create stream\nI0505 00:39:15.247316 2409 log.go:172] (0xc000647080) (0xc000c6a0a0) Stream added, broadcasting: 3\nI0505 00:39:15.248049 2409 log.go:172] (0xc000647080) Reply frame received for 3\nI0505 00:39:15.248069 2409 log.go:172] (0xc000647080) (0xc0006050e0) Create stream\nI0505 00:39:15.248083 2409 
log.go:172] (0xc000647080) (0xc0006050e0) Stream added, broadcasting: 5\nI0505 00:39:15.248662 2409 log.go:172] (0xc000647080) Reply frame received for 5\nI0505 00:39:15.307119 2409 log.go:172] (0xc000647080) Data frame received for 5\nI0505 00:39:15.307161 2409 log.go:172] (0xc0006050e0) (5) Data frame handling\nI0505 00:39:15.307178 2409 log.go:172] (0xc0006050e0) (5) Data frame sent\nI0505 00:39:15.307189 2409 log.go:172] (0xc000647080) Data frame received for 5\nI0505 00:39:15.307216 2409 log.go:172] (0xc0006050e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0505 00:39:15.307255 2409 log.go:172] (0xc000647080) Data frame received for 3\nI0505 00:39:15.307285 2409 log.go:172] (0xc000c6a0a0) (3) Data frame handling\nI0505 00:39:15.307342 2409 log.go:172] (0xc000c6a0a0) (3) Data frame sent\nI0505 00:39:15.307368 2409 log.go:172] (0xc000647080) Data frame received for 3\nI0505 00:39:15.307388 2409 log.go:172] (0xc000c6a0a0) (3) Data frame handling\nI0505 00:39:15.308670 2409 log.go:172] (0xc000647080) Data frame received for 1\nI0505 00:39:15.308691 2409 log.go:172] (0xc000b605a0) (1) Data frame handling\nI0505 00:39:15.308710 2409 log.go:172] (0xc000b605a0) (1) Data frame sent\nI0505 00:39:15.308730 2409 log.go:172] (0xc000647080) (0xc000b605a0) Stream removed, broadcasting: 1\nI0505 00:39:15.308752 2409 log.go:172] (0xc000647080) Go away received\nI0505 00:39:15.309321 2409 log.go:172] (0xc000647080) (0xc000b605a0) Stream removed, broadcasting: 1\nI0505 00:39:15.309341 2409 log.go:172] (0xc000647080) (0xc000c6a0a0) Stream removed, broadcasting: 3\nI0505 00:39:15.309352 2409 log.go:172] (0xc000647080) (0xc0006050e0) Stream removed, broadcasting: 5\n" May 5 00:39:15.314: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 5 00:39:15.314: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 5 00:39:15.314: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9971 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 00:39:15.552: INFO: stderr: "I0505 00:39:15.470681 2433 log.go:172] (0xc00095bb80) (0xc000b1e0a0) Create stream\nI0505 00:39:15.470737 2433 log.go:172] (0xc00095bb80) (0xc000b1e0a0) Stream added, broadcasting: 1\nI0505 00:39:15.475167 2433 log.go:172] (0xc00095bb80) Reply frame received for 1\nI0505 00:39:15.475215 2433 log.go:172] (0xc00095bb80) (0xc00063eaa0) Create stream\nI0505 00:39:15.475228 2433 log.go:172] (0xc00095bb80) (0xc00063eaa0) Stream added, broadcasting: 3\nI0505 00:39:15.476122 2433 log.go:172] (0xc00095bb80) Reply frame received for 3\nI0505 00:39:15.476158 2433 log.go:172] (0xc00095bb80) (0xc0006380a0) Create stream\nI0505 00:39:15.476168 2433 log.go:172] (0xc00095bb80) (0xc0006380a0) Stream added, broadcasting: 5\nI0505 00:39:15.476983 2433 log.go:172] (0xc00095bb80) Reply frame received for 5\nI0505 00:39:15.545491 2433 log.go:172] (0xc00095bb80) Data frame received for 3\nI0505 00:39:15.545532 2433 log.go:172] (0xc00063eaa0) (3) Data frame handling\nI0505 00:39:15.545561 2433 log.go:172] (0xc00063eaa0) (3) Data frame sent\nI0505 00:39:15.545577 2433 log.go:172] (0xc00095bb80) Data frame received for 3\nI0505 00:39:15.545592 2433 log.go:172] (0xc00063eaa0) (3) Data frame handling\nI0505 00:39:15.545743 2433 log.go:172] (0xc00095bb80) Data frame received for 5\nI0505 00:39:15.545766 2433 log.go:172] (0xc0006380a0) (5) Data frame handling\nI0505 00:39:15.545782 2433 log.go:172] (0xc0006380a0) (5) Data frame sent\nI0505 00:39:15.545790 2433 log.go:172] (0xc00095bb80) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0505 00:39:15.545797 2433 log.go:172] (0xc0006380a0) (5) Data frame handling\nI0505 00:39:15.547332 2433 log.go:172] (0xc00095bb80) Data frame received for 1\nI0505 00:39:15.547358 2433 
log.go:172] (0xc000b1e0a0) (1) Data frame handling\nI0505 00:39:15.547368 2433 log.go:172] (0xc000b1e0a0) (1) Data frame sent\nI0505 00:39:15.547395 2433 log.go:172] (0xc00095bb80) (0xc000b1e0a0) Stream removed, broadcasting: 1\nI0505 00:39:15.547430 2433 log.go:172] (0xc00095bb80) Go away received\nI0505 00:39:15.547749 2433 log.go:172] (0xc00095bb80) (0xc000b1e0a0) Stream removed, broadcasting: 1\nI0505 00:39:15.547769 2433 log.go:172] (0xc00095bb80) (0xc00063eaa0) Stream removed, broadcasting: 3\nI0505 00:39:15.547778 2433 log.go:172] (0xc00095bb80) (0xc0006380a0) Stream removed, broadcasting: 5\n" May 5 00:39:15.552: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 5 00:39:15.552: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 5 00:39:15.552: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9971 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 00:39:15.776: INFO: stderr: "I0505 00:39:15.683049 2454 log.go:172] (0xc000673130) (0xc00036fe00) Create stream\nI0505 00:39:15.683309 2454 log.go:172] (0xc000673130) (0xc00036fe00) Stream added, broadcasting: 1\nI0505 00:39:15.687252 2454 log.go:172] (0xc000673130) Reply frame received for 1\nI0505 00:39:15.687308 2454 log.go:172] (0xc000673130) (0xc00058c320) Create stream\nI0505 00:39:15.687325 2454 log.go:172] (0xc000673130) (0xc00058c320) Stream added, broadcasting: 3\nI0505 00:39:15.688245 2454 log.go:172] (0xc000673130) Reply frame received for 3\nI0505 00:39:15.688291 2454 log.go:172] (0xc000673130) (0xc00028ee60) Create stream\nI0505 00:39:15.688304 2454 log.go:172] (0xc000673130) (0xc00028ee60) Stream added, broadcasting: 5\nI0505 00:39:15.689380 2454 log.go:172] (0xc000673130) Reply frame received for 5\nI0505 00:39:15.768222 2454 log.go:172] (0xc000673130) 
Data frame received for 5\nI0505 00:39:15.768265 2454 log.go:172] (0xc000673130) Data frame received for 3\nI0505 00:39:15.768317 2454 log.go:172] (0xc00058c320) (3) Data frame handling\nI0505 00:39:15.768341 2454 log.go:172] (0xc00058c320) (3) Data frame sent\nI0505 00:39:15.768382 2454 log.go:172] (0xc00028ee60) (5) Data frame handling\nI0505 00:39:15.768416 2454 log.go:172] (0xc00028ee60) (5) Data frame sent\nI0505 00:39:15.768458 2454 log.go:172] (0xc000673130) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0505 00:39:15.768483 2454 log.go:172] (0xc00028ee60) (5) Data frame handling\nI0505 00:39:15.768537 2454 log.go:172] (0xc000673130) Data frame received for 3\nI0505 00:39:15.768555 2454 log.go:172] (0xc00058c320) (3) Data frame handling\nI0505 00:39:15.770052 2454 log.go:172] (0xc000673130) Data frame received for 1\nI0505 00:39:15.770080 2454 log.go:172] (0xc00036fe00) (1) Data frame handling\nI0505 00:39:15.770100 2454 log.go:172] (0xc00036fe00) (1) Data frame sent\nI0505 00:39:15.770263 2454 log.go:172] (0xc000673130) (0xc00036fe00) Stream removed, broadcasting: 1\nI0505 00:39:15.770445 2454 log.go:172] (0xc000673130) Go away received\nI0505 00:39:15.770607 2454 log.go:172] (0xc000673130) (0xc00036fe00) Stream removed, broadcasting: 1\nI0505 00:39:15.770640 2454 log.go:172] (0xc000673130) (0xc00058c320) Stream removed, broadcasting: 3\nI0505 00:39:15.770651 2454 log.go:172] (0xc000673130) (0xc00028ee60) Stream removed, broadcasting: 5\n" May 5 00:39:15.776: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 5 00:39:15.776: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 5 00:39:15.776: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 5 00:39:35.795: INFO: Deleting all statefulset in ns statefulset-9971 May 5 00:39:35.797: INFO: Scaling statefulset ss to 0 May 5 00:39:35.806: INFO: Waiting for statefulset status.replicas updated to 0 May 5 00:39:35.808: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:39:35.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9971" for this suite. • [SLOW TEST:82.460 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":288,"completed":189,"skipped":3174,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:39:35.834: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 5 00:39:43.992: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 5 00:39:44.028: INFO: Pod pod-with-prestop-http-hook still exists May 5 00:39:46.028: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 5 00:39:46.034: INFO: Pod pod-with-prestop-http-hook still exists May 5 00:39:48.028: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 5 00:39:48.033: INFO: Pod pod-with-prestop-http-hook still exists May 5 00:39:50.028: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 5 00:39:50.032: INFO: Pod pod-with-prestop-http-hook still exists May 5 00:39:52.028: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 5 00:39:52.032: INFO: Pod pod-with-prestop-http-hook still exists May 5 00:39:54.028: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 5 00:39:54.033: INFO: Pod pod-with-prestop-http-hook still exists May 5 00:39:56.028: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 5 00:39:56.032: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:39:56.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-296" for this 
suite. • [SLOW TEST:20.219 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":288,"completed":190,"skipped":3226,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:39:56.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 5 00:39:56.117: INFO: Waiting up to 5m0s for pod "pod-fb19bfb2-6c5b-4cea-ab70-3c46f55134b7" in namespace "emptydir-9386" to be "Succeeded or Failed" May 5 00:39:56.171: INFO: Pod "pod-fb19bfb2-6c5b-4cea-ab70-3c46f55134b7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 53.869188ms May 5 00:39:58.175: INFO: Pod "pod-fb19bfb2-6c5b-4cea-ab70-3c46f55134b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058191782s May 5 00:40:00.179: INFO: Pod "pod-fb19bfb2-6c5b-4cea-ab70-3c46f55134b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062511221s STEP: Saw pod success May 5 00:40:00.179: INFO: Pod "pod-fb19bfb2-6c5b-4cea-ab70-3c46f55134b7" satisfied condition "Succeeded or Failed" May 5 00:40:00.182: INFO: Trying to get logs from node latest-worker2 pod pod-fb19bfb2-6c5b-4cea-ab70-3c46f55134b7 container test-container: STEP: delete the pod May 5 00:40:00.243: INFO: Waiting for pod pod-fb19bfb2-6c5b-4cea-ab70-3c46f55134b7 to disappear May 5 00:40:00.461: INFO: Pod pod-fb19bfb2-6c5b-4cea-ab70-3c46f55134b7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:40:00.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9386" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":191,"skipped":3227,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:40:00.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-e74f0dc8-ab9f-4816-a5f1-7986a18ef171 May 5 00:40:00.673: INFO: Pod name my-hostname-basic-e74f0dc8-ab9f-4816-a5f1-7986a18ef171: Found 0 pods out of 1 May 5 00:40:05.677: INFO: Pod name my-hostname-basic-e74f0dc8-ab9f-4816-a5f1-7986a18ef171: Found 1 pods out of 1 May 5 00:40:05.677: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-e74f0dc8-ab9f-4816-a5f1-7986a18ef171" are running May 5 00:40:05.680: INFO: Pod "my-hostname-basic-e74f0dc8-ab9f-4816-a5f1-7986a18ef171-6vptk" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-05 00:40:00 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-05 00:40:04 +0000 UTC Reason: Message:} 
{Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-05 00:40:04 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-05 00:40:00 +0000 UTC Reason: Message:}]) May 5 00:40:05.680: INFO: Trying to dial the pod May 5 00:40:10.690: INFO: Controller my-hostname-basic-e74f0dc8-ab9f-4816-a5f1-7986a18ef171: Got expected result from replica 1 [my-hostname-basic-e74f0dc8-ab9f-4816-a5f1-7986a18ef171-6vptk]: "my-hostname-basic-e74f0dc8-ab9f-4816-a5f1-7986a18ef171-6vptk", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:40:10.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1044" for this suite. • [SLOW TEST:10.255 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":192,"skipped":3240,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:40:10.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:40:22.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7320" for this suite. • [SLOW TEST:11.302 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":288,"completed":193,"skipped":3282,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:40:22.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-ngdt STEP: Creating a pod to test atomic-volume-subpath May 5 00:40:22.169: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ngdt" in namespace "subpath-9333" to be "Succeeded or Failed" May 5 00:40:22.191: INFO: Pod "pod-subpath-test-configmap-ngdt": Phase="Pending", Reason="", readiness=false. Elapsed: 21.957889ms May 5 00:40:24.196: INFO: Pod "pod-subpath-test-configmap-ngdt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026864633s May 5 00:40:26.201: INFO: Pod "pod-subpath-test-configmap-ngdt": Phase="Running", Reason="", readiness=true. Elapsed: 4.031834113s May 5 00:40:28.206: INFO: Pod "pod-subpath-test-configmap-ngdt": Phase="Running", Reason="", readiness=true. Elapsed: 6.036786799s May 5 00:40:30.210: INFO: Pod "pod-subpath-test-configmap-ngdt": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.040999801s May 5 00:40:32.215: INFO: Pod "pod-subpath-test-configmap-ngdt": Phase="Running", Reason="", readiness=true. Elapsed: 10.045308334s May 5 00:40:34.219: INFO: Pod "pod-subpath-test-configmap-ngdt": Phase="Running", Reason="", readiness=true. Elapsed: 12.049520961s May 5 00:40:36.223: INFO: Pod "pod-subpath-test-configmap-ngdt": Phase="Running", Reason="", readiness=true. Elapsed: 14.053385732s May 5 00:40:38.226: INFO: Pod "pod-subpath-test-configmap-ngdt": Phase="Running", Reason="", readiness=true. Elapsed: 16.056752139s May 5 00:40:40.231: INFO: Pod "pod-subpath-test-configmap-ngdt": Phase="Running", Reason="", readiness=true. Elapsed: 18.061232689s May 5 00:40:42.235: INFO: Pod "pod-subpath-test-configmap-ngdt": Phase="Running", Reason="", readiness=true. Elapsed: 20.065709819s May 5 00:40:44.239: INFO: Pod "pod-subpath-test-configmap-ngdt": Phase="Running", Reason="", readiness=true. Elapsed: 22.070067497s May 5 00:40:46.244: INFO: Pod "pod-subpath-test-configmap-ngdt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.074568019s STEP: Saw pod success May 5 00:40:46.244: INFO: Pod "pod-subpath-test-configmap-ngdt" satisfied condition "Succeeded or Failed" May 5 00:40:46.247: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-ngdt container test-container-subpath-configmap-ngdt: STEP: delete the pod May 5 00:40:46.287: INFO: Waiting for pod pod-subpath-test-configmap-ngdt to disappear May 5 00:40:46.322: INFO: Pod pod-subpath-test-configmap-ngdt no longer exists STEP: Deleting pod pod-subpath-test-configmap-ngdt May 5 00:40:46.322: INFO: Deleting pod "pod-subpath-test-configmap-ngdt" in namespace "subpath-9333" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:40:46.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9333" for this suite. 
• [SLOW TEST:24.306 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":288,"completed":194,"skipped":3314,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:40:46.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 5 00:40:46.387: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:40:54.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1399" for this suite. 
• [SLOW TEST:7.874 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":288,"completed":195,"skipped":3326,"failed":0} SS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:40:54.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange May 5 00:40:54.290: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values May 5 00:40:54.295: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 5 00:40:54.295: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange May 5 00:40:54.323: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 5 00:40:54.323: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange May 5 00:40:54.624: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} 
ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] May 5 00:40:54.624: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted May 5 00:41:02.070: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:41:02.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-540" for this suite. • [SLOW TEST:7.979 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance]","total":288,"completed":196,"skipped":3328,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:41:02.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 5 00:41:02.340: INFO: Waiting up to 5m0s for pod "pod-fe761b4d-baa6-4e7a-a409-0aa3cd2f2190" in namespace "emptydir-1042" to be "Succeeded or Failed" May 5 00:41:02.342: INFO: Pod "pod-fe761b4d-baa6-4e7a-a409-0aa3cd2f2190": Phase="Pending", Reason="", readiness=false. Elapsed: 2.252123ms May 5 00:41:04.496: INFO: Pod "pod-fe761b4d-baa6-4e7a-a409-0aa3cd2f2190": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156327778s May 5 00:41:06.507: INFO: Pod "pod-fe761b4d-baa6-4e7a-a409-0aa3cd2f2190": Phase="Running", Reason="", readiness=true. Elapsed: 4.167838905s May 5 00:41:08.529: INFO: Pod "pod-fe761b4d-baa6-4e7a-a409-0aa3cd2f2190": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.189835706s STEP: Saw pod success May 5 00:41:08.530: INFO: Pod "pod-fe761b4d-baa6-4e7a-a409-0aa3cd2f2190" satisfied condition "Succeeded or Failed" May 5 00:41:08.532: INFO: Trying to get logs from node latest-worker2 pod pod-fe761b4d-baa6-4e7a-a409-0aa3cd2f2190 container test-container: STEP: delete the pod May 5 00:41:08.715: INFO: Waiting for pod pod-fe761b4d-baa6-4e7a-a409-0aa3cd2f2190 to disappear May 5 00:41:08.888: INFO: Pod pod-fe761b4d-baa6-4e7a-a409-0aa3cd2f2190 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:41:08.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1042" for this suite. • [SLOW TEST:6.872 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":197,"skipped":3359,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:41:09.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info May 5 00:41:09.610: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config cluster-info' May 5 00:41:09.748: INFO: stderr: "" May 5 00:41:09.748: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:41:09.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8977" for this suite. 
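[Editor's note] The `cluster-info` stdout captured above is littered with `\x1b[0;32m`-style sequences because kubectl colorizes its terminal output; stripped of those ANSI codes it reads plainly ("Kubernetes master is running at https://172.30.12.66:32773"). A small sketch of that cleanup:

```python
import re

# Matches CSI color/style sequences such as \x1b[0;32m and \x1b[0m
ANSI_RE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(s):
    """Remove terminal color codes so captured stdout reads plainly."""
    return ANSI_RE.sub("", s)

colored = ("\x1b[0;32mKubernetes master\x1b[0m is running at "
           "\x1b[0;33mhttps://172.30.12.66:32773\x1b[0m")
assert strip_ansi(colored) == (
    "Kubernetes master is running at https://172.30.12.66:32773")
```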
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":288,"completed":198,"skipped":3364,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:41:09.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-1fdb66d2-3d8f-4d1a-a0ad-e3dcd04a6022 STEP: Creating a pod to test consume secrets May 5 00:41:10.126: INFO: Waiting up to 5m0s for pod "pod-secrets-41763ef6-90d4-46a4-a3ca-598fc7b0cc1c" in namespace "secrets-7938" to be "Succeeded or Failed" May 5 00:41:10.138: INFO: Pod "pod-secrets-41763ef6-90d4-46a4-a3ca-598fc7b0cc1c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.474435ms May 5 00:41:12.142: INFO: Pod "pod-secrets-41763ef6-90d4-46a4-a3ca-598fc7b0cc1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015973755s May 5 00:41:14.145: INFO: Pod "pod-secrets-41763ef6-90d4-46a4-a3ca-598fc7b0cc1c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018674689s STEP: Saw pod success May 5 00:41:14.145: INFO: Pod "pod-secrets-41763ef6-90d4-46a4-a3ca-598fc7b0cc1c" satisfied condition "Succeeded or Failed" May 5 00:41:14.150: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-41763ef6-90d4-46a4-a3ca-598fc7b0cc1c container secret-env-test: STEP: delete the pod May 5 00:41:14.205: INFO: Waiting for pod pod-secrets-41763ef6-90d4-46a4-a3ca-598fc7b0cc1c to disappear May 5 00:41:14.210: INFO: Pod pod-secrets-41763ef6-90d4-46a4-a3ca-598fc7b0cc1c no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:41:14.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7938" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":288,"completed":199,"skipped":3371,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:41:14.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-6ecfd623-37b4-4e39-b2e7-566f792ac9ca STEP: Creating configMap with name cm-test-opt-upd-6136418e-3531-4c7b-9aa9-cb508e7d321c STEP: Creating the pod 
STEP: Deleting configmap cm-test-opt-del-6ecfd623-37b4-4e39-b2e7-566f792ac9ca STEP: Updating configmap cm-test-opt-upd-6136418e-3531-4c7b-9aa9-cb508e7d321c STEP: Creating configMap with name cm-test-opt-create-8649704e-5340-447b-9143-e0fc060f6fa0 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:41:24.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8978" for this suite. • [SLOW TEST:10.300 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":200,"skipped":3383,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:41:24.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 5 00:41:25.157: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 5 00:41:27.248: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236085, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236085, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236085, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236085, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 00:41:29.252: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236085, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236085, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have 
minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236085, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236085, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 00:41:32.310: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 5 00:41:32.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:41:33.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-4476" for this suite. 
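[Editor's note] The webhook this test deploys receives an apiextensions.k8s.io/v1 ConversionReview whose request carries the stored objects plus a desiredAPIVersion, and must answer with convertedObjects. A minimal sketch of that contract; the identity-plus-apiVersion-rewrite shown is illustrative, and the group/kind names in the sample are hypothetical, not the e2e webhook's actual field mapping:

```python
def convert_review(review):
    """Answer a ConversionReview by rewriting each object's apiVersion
    to the requested version. Real webhooks also translate any fields
    whose schema changed between versions."""
    req = review["request"]
    converted = []
    for obj in req["objects"]:
        new_obj = dict(obj)
        new_obj["apiVersion"] = req["desiredAPIVersion"]
        converted.append(new_obj)
    return {
        "apiVersion": review["apiVersion"],
        "kind": "ConversionReview",
        "response": {
            "uid": req["uid"],
            "convertedObjects": converted,
            "result": {"status": "Success"},
        },
    }

review = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "ConversionReview",
    "request": {
        "uid": "489944c9",
        "desiredAPIVersion": "stable.example.com/v2",
        "objects": [{"apiVersion": "stable.example.com/v1",
                     "kind": "E2eCRD"}],
    },
}
out = convert_review(review)
assert out["response"]["convertedObjects"][0]["apiVersion"] == "stable.example.com/v2"
```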
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.066 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":288,"completed":201,"skipped":3390,"failed":0} [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:41:33.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 5 00:41:33.719: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:41:40.030: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "custom-resource-definition-1959" for this suite. • [SLOW TEST:6.456 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":288,"completed":202,"skipped":3390,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:41:40.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-2304 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2304 to expose endpoints map[] May 5 00:41:40.158: INFO: Get endpoints failed 
(3.381193ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 5 00:41:41.161: INFO: successfully validated that service multi-endpoint-test in namespace services-2304 exposes endpoints map[] (1.007057249s elapsed) STEP: Creating pod pod1 in namespace services-2304 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2304 to expose endpoints map[pod1:[100]] May 5 00:41:44.263: INFO: successfully validated that service multi-endpoint-test in namespace services-2304 exposes endpoints map[pod1:[100]] (3.094011597s elapsed) STEP: Creating pod pod2 in namespace services-2304 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2304 to expose endpoints map[pod1:[100] pod2:[101]] May 5 00:41:48.671: INFO: successfully validated that service multi-endpoint-test in namespace services-2304 exposes endpoints map[pod1:[100] pod2:[101]] (4.401753286s elapsed) STEP: Deleting pod pod1 in namespace services-2304 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2304 to expose endpoints map[pod2:[101]] May 5 00:41:48.758: INFO: successfully validated that service multi-endpoint-test in namespace services-2304 exposes endpoints map[pod2:[101]] (70.717108ms elapsed) STEP: Deleting pod pod2 in namespace services-2304 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2304 to expose endpoints map[] May 5 00:41:49.821: INFO: successfully validated that service multi-endpoint-test in namespace services-2304 exposes endpoints map[] (1.044655019s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:41:49.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2304" for this suite. 
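[Editor's note] The multiport-endpoints test above repeatedly compares the observed Endpoints object against an expected pod-to-ports map (the `map[pod1:[100] pod2:[101]]` notation). A sketch of that comparison, using a simplified stand-in for the real `Endpoints.subsets` structure:

```python
def endpoints_match(expected, subsets):
    """Check that the pod->ports map derived from Endpoints subsets
    equals the expected map, ignoring ordering. `subsets` is a list of
    {"addresses": [pod names], "ports": [port numbers]} dicts."""
    observed = {}
    for subset in subsets:
        for pod in subset["addresses"]:
            observed.setdefault(pod, set()).update(subset["ports"])
    return observed == {pod: set(ports) for pod, ports in expected.items()}

# map[pod1:[100] pod2:[101]] against a two-subset Endpoints object:
subsets = [
    {"addresses": ["pod1"], "ports": [100]},
    {"addresses": ["pod2"], "ports": [101]},
]
assert endpoints_match({"pod1": [100], "pod2": [101]}, subsets)
assert not endpoints_match({"pod2": [101]}, subsets)
```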
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:9.855 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":288,"completed":203,"skipped":3398,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:41:49.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
May 5 00:41:50.019: INFO: Created pod &Pod{ObjectMeta:{dns-639 dns-639 /api/v1/namespaces/dns-639/pods/dns-639 809e99a5-7e20-4c51-9a15-4bfa35a94142 1533513 0 2020-05-05 00:41:50 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-05-05 00:41:49 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b4p7x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b4p7x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b4p7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,}
,},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 00:41:50.034: INFO: The status of Pod dns-639 is Pending, waiting for it to be Running (with Ready = true) May 5 00:41:52.038: INFO: The status of Pod dns-639 is Pending, waiting for it to be Running (with Ready = true) May 5 00:41:54.038: INFO: The status of Pod dns-639 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
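[Editor's note] With `dnsPolicy: None`, the kubelet builds the pod's resolv.conf entirely from the spec's `dnsConfig` — here `Nameservers:[1.1.1.1]` and `Searches:[resolv.conf.local]`, which is what the suffix-list and server-list checks below verify. A sketch of that rendering (simplified; the kubelet's actual writer also handles option merging and entry limits):

```python
def render_resolv_conf(dns_config):
    """Render a resolv.conf body from a pod dnsConfig dict, as used
    when dnsPolicy is None."""
    lines = [f"nameserver {ns}" for ns in dns_config.get("nameservers", [])]
    if dns_config.get("searches"):
        lines.append("search " + " ".join(dns_config["searches"]))
    for opt in dns_config.get("options", []):
        lines.append("options " + opt)
    return "\n".join(lines) + "\n"

# The dnsConfig from the pod spec above:
cfg = {"nameservers": ["1.1.1.1"], "searches": ["resolv.conf.local"]}
assert render_resolv_conf(cfg) == "nameserver 1.1.1.1\nsearch resolv.conf.local\n"
```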
May 5 00:41:54.038: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-639 PodName:dns-639 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 00:41:54.038: INFO: >>> kubeConfig: /root/.kube/config I0505 00:41:54.069624 7 log.go:172] (0xc0037b62c0) (0xc0024b9400) Create stream I0505 00:41:54.069652 7 log.go:172] (0xc0037b62c0) (0xc0024b9400) Stream added, broadcasting: 1 I0505 00:41:54.071306 7 log.go:172] (0xc0037b62c0) Reply frame received for 1 I0505 00:41:54.071352 7 log.go:172] (0xc0037b62c0) (0xc0017c6000) Create stream I0505 00:41:54.071376 7 log.go:172] (0xc0037b62c0) (0xc0017c6000) Stream added, broadcasting: 3 I0505 00:41:54.072342 7 log.go:172] (0xc0037b62c0) Reply frame received for 3 I0505 00:41:54.072404 7 log.go:172] (0xc0037b62c0) (0xc002cc8000) Create stream I0505 00:41:54.072421 7 log.go:172] (0xc0037b62c0) (0xc002cc8000) Stream added, broadcasting: 5 I0505 00:41:54.073404 7 log.go:172] (0xc0037b62c0) Reply frame received for 5 I0505 00:41:54.159714 7 log.go:172] (0xc0037b62c0) Data frame received for 3 I0505 00:41:54.159760 7 log.go:172] (0xc0017c6000) (3) Data frame handling I0505 00:41:54.159796 7 log.go:172] (0xc0017c6000) (3) Data frame sent I0505 00:41:54.160527 7 log.go:172] (0xc0037b62c0) Data frame received for 5 I0505 00:41:54.160558 7 log.go:172] (0xc0037b62c0) Data frame received for 3 I0505 00:41:54.160569 7 log.go:172] (0xc0017c6000) (3) Data frame handling I0505 00:41:54.160616 7 log.go:172] (0xc002cc8000) (5) Data frame handling I0505 00:41:54.162728 7 log.go:172] (0xc0037b62c0) Data frame received for 1 I0505 00:41:54.162755 7 log.go:172] (0xc0024b9400) (1) Data frame handling I0505 00:41:54.162780 7 log.go:172] (0xc0024b9400) (1) Data frame sent I0505 00:41:54.162795 7 log.go:172] (0xc0037b62c0) (0xc0024b9400) Stream removed, broadcasting: 1 I0505 00:41:54.162811 7 log.go:172] (0xc0037b62c0) Go away received I0505 00:41:54.163013 7 log.go:172] (0xc0037b62c0) 
(0xc0024b9400) Stream removed, broadcasting: 1 I0505 00:41:54.163045 7 log.go:172] (0xc0037b62c0) (0xc0017c6000) Stream removed, broadcasting: 3 I0505 00:41:54.163065 7 log.go:172] (0xc0037b62c0) (0xc002cc8000) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... May 5 00:41:54.163: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-639 PodName:dns-639 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 00:41:54.163: INFO: >>> kubeConfig: /root/.kube/config I0505 00:41:54.194156 7 log.go:172] (0xc003848370) (0xc002cc8640) Create stream I0505 00:41:54.194187 7 log.go:172] (0xc003848370) (0xc002cc8640) Stream added, broadcasting: 1 I0505 00:41:54.199050 7 log.go:172] (0xc003848370) Reply frame received for 1 I0505 00:41:54.199103 7 log.go:172] (0xc003848370) (0xc002cc8780) Create stream I0505 00:41:54.199125 7 log.go:172] (0xc003848370) (0xc002cc8780) Stream added, broadcasting: 3 I0505 00:41:54.200579 7 log.go:172] (0xc003848370) Reply frame received for 3 I0505 00:41:54.200616 7 log.go:172] (0xc003848370) (0xc00133bae0) Create stream I0505 00:41:54.200628 7 log.go:172] (0xc003848370) (0xc00133bae0) Stream added, broadcasting: 5 I0505 00:41:54.202163 7 log.go:172] (0xc003848370) Reply frame received for 5 I0505 00:41:54.274251 7 log.go:172] (0xc003848370) Data frame received for 3 I0505 00:41:54.274281 7 log.go:172] (0xc002cc8780) (3) Data frame handling I0505 00:41:54.274302 7 log.go:172] (0xc002cc8780) (3) Data frame sent I0505 00:41:54.275399 7 log.go:172] (0xc003848370) Data frame received for 3 I0505 00:41:54.275432 7 log.go:172] (0xc002cc8780) (3) Data frame handling I0505 00:41:54.275457 7 log.go:172] (0xc003848370) Data frame received for 5 I0505 00:41:54.275476 7 log.go:172] (0xc00133bae0) (5) Data frame handling I0505 00:41:54.276794 7 log.go:172] (0xc003848370) Data frame received for 1 I0505 00:41:54.276815 7 log.go:172] (0xc002cc8640) (1) Data 
frame handling I0505 00:41:54.276835 7 log.go:172] (0xc002cc8640) (1) Data frame sent I0505 00:41:54.276846 7 log.go:172] (0xc003848370) (0xc002cc8640) Stream removed, broadcasting: 1 I0505 00:41:54.276861 7 log.go:172] (0xc003848370) Go away received I0505 00:41:54.276968 7 log.go:172] (0xc003848370) (0xc002cc8640) Stream removed, broadcasting: 1 I0505 00:41:54.276998 7 log.go:172] (0xc003848370) (0xc002cc8780) Stream removed, broadcasting: 3 I0505 00:41:54.277026 7 log.go:172] (0xc003848370) (0xc00133bae0) Stream removed, broadcasting: 5 May 5 00:41:54.277: INFO: Deleting pod dns-639... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:41:54.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-639" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":288,"completed":204,"skipped":3417,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:41:54.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 5 00:41:54.676: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with 
known and required properties May 5 00:41:57.719: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3990 create -f -' May 5 00:42:01.001: INFO: stderr: "" May 5 00:42:01.001: INFO: stdout: "e2e-test-crd-publish-openapi-7800-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 5 00:42:01.001: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3990 delete e2e-test-crd-publish-openapi-7800-crds test-foo' May 5 00:42:01.125: INFO: stderr: "" May 5 00:42:01.125: INFO: stdout: "e2e-test-crd-publish-openapi-7800-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 5 00:42:01.125: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3990 apply -f -' May 5 00:42:01.406: INFO: stderr: "" May 5 00:42:01.406: INFO: stdout: "e2e-test-crd-publish-openapi-7800-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 5 00:42:01.406: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3990 delete e2e-test-crd-publish-openapi-7800-crds test-foo' May 5 00:42:01.523: INFO: stderr: "" May 5 00:42:01.523: INFO: stdout: "e2e-test-crd-publish-openapi-7800-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 5 00:42:01.523: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3990 create -f -' May 5 00:42:01.762: INFO: rc: 1 May 5 00:42:01.762: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-3990 apply -f -' May 5 00:42:01.988: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 5 00:42:01.988: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3990 create -f -' May 5 00:42:02.253: INFO: rc: 1 May 5 00:42:02.253: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3990 apply -f -' May 5 00:42:02.517: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 5 00:42:02.517: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7800-crds' May 5 00:42:02.758: INFO: stderr: "" May 5 00:42:02.758: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7800-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 5 00:42:02.759: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7800-crds.metadata' May 5 00:42:02.999: INFO: stderr: "" May 5 00:42:02.999: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7800-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. 
If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. 
May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 5 00:42:03.000: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7800-crds.spec' May 5 00:42:03.316: INFO: stderr: "" May 5 00:42:03.316: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7800-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 5 00:42:03.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7800-crds.spec.bars' May 5 00:42:03.560: INFO: stderr: "" May 5 00:42:03.560: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7800-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n 
List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 5 00:42:03.561: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7800-crds.spec.bars2' May 5 00:42:03.795: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:42:05.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3990" for this suite. • [SLOW TEST:11.409 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":288,"completed":205,"skipped":3436,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:42:05.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 5 00:42:05.806: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9d0cc369-3f74-4945-acd7-e20ef8e8f100" in namespace "projected-6389" to be "Succeeded or Failed" May 5 00:42:05.822: INFO: Pod "downwardapi-volume-9d0cc369-3f74-4945-acd7-e20ef8e8f100": Phase="Pending", Reason="", readiness=false. Elapsed: 15.432642ms May 5 00:42:07.825: INFO: Pod "downwardapi-volume-9d0cc369-3f74-4945-acd7-e20ef8e8f100": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01891672s May 5 00:42:09.829: INFO: Pod "downwardapi-volume-9d0cc369-3f74-4945-acd7-e20ef8e8f100": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023078329s STEP: Saw pod success May 5 00:42:09.829: INFO: Pod "downwardapi-volume-9d0cc369-3f74-4945-acd7-e20ef8e8f100" satisfied condition "Succeeded or Failed" May 5 00:42:09.832: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-9d0cc369-3f74-4945-acd7-e20ef8e8f100 container client-container: STEP: delete the pod May 5 00:42:09.899: INFO: Waiting for pod downwardapi-volume-9d0cc369-3f74-4945-acd7-e20ef8e8f100 to disappear May 5 00:42:09.909: INFO: Pod downwardapi-volume-9d0cc369-3f74-4945-acd7-e20ef8e8f100 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:42:09.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6389" for this suite. 
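As an aside for readers reproducing this by hand: the downward API volume behavior verified above can be exercised with a manifest along these lines. The pod name, mount path, and image here are illustrative, not the values generated by the test:

```shell
# Sketch of a pod that reads its own name through a downward API
# volume, as the projected downwardAPI test does. Requires a cluster.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                 # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
```

The container prints the pod's own name and exits, which is the "Succeeded or Failed" condition the log waits on before fetching logs and deleting the pod.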
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":288,"completed":206,"skipped":3437,"failed":0} SSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:42:09.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-7555 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7555 to expose endpoints map[] May 5 00:42:10.059: INFO: Get endpoints failed (8.279322ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 5 00:42:11.063: INFO: successfully validated that service endpoint-test2 in namespace services-7555 exposes endpoints map[] (1.012300724s elapsed) STEP: Creating pod pod1 in namespace services-7555 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7555 to expose endpoints map[pod1:[80]] May 5 00:42:14.188: INFO: successfully validated that service endpoint-test2 in namespace services-7555 exposes endpoints map[pod1:[80]] (3.117205111s elapsed) STEP: Creating pod pod2 in namespace services-7555 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7555 to expose endpoints map[pod1:[80] pod2:[80]] May 5 00:42:18.285: INFO: 
successfully validated that service endpoint-test2 in namespace services-7555 exposes endpoints map[pod1:[80] pod2:[80]] (4.090874486s elapsed) STEP: Deleting pod pod1 in namespace services-7555 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7555 to expose endpoints map[pod2:[80]] May 5 00:42:19.379: INFO: successfully validated that service endpoint-test2 in namespace services-7555 exposes endpoints map[pod2:[80]] (1.090367042s elapsed) STEP: Deleting pod pod2 in namespace services-7555 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7555 to expose endpoints map[] May 5 00:42:20.484: INFO: successfully validated that service endpoint-test2 in namespace services-7555 exposes endpoints map[] (1.099755071s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:42:20.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7555" for this suite. 
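The endpoint bookkeeping traced above (endpoints appear as labeled pods become Ready, and disappear as they are deleted) can be observed by hand with a sketch like this; the service name, label, and image are illustrative:

```shell
# A Service plus one backing pod; the Endpoints object tracks the pod.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: endpoint-demo
spec:
  selector:
    app: endpoint-demo
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    app: endpoint-demo            # matches the Service selector
spec:
  containers:
  - name: web
    image: nginx                  # illustrative image
    ports:
    - containerPort: 8080
EOF
# Once pod1 is Ready its IP appears in the Endpoints object;
# deleting pod1 empties it again, as the log's map[] transitions show:
kubectl get endpoints endpoint-demo
```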
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:10.653 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":288,"completed":207,"skipped":3440,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:42:20.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:42:20.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5656" for this suite. 
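The create / patch / select-by-label / delete sequence in the Secrets test maps onto plain kubectl roughly as follows (the secret name and label key here are illustrative, not the ones the test generated):

```shell
# Create a secret, patch a label onto it, then locate and delete it
# via that label, mirroring the STEPs in the log above.
kubectl create secret generic patch-demo --from-literal=key=value
kubectl patch secret patch-demo \
  -p '{"metadata":{"labels":{"patched":"true"}}}'
kubectl get secrets --all-namespaces -l patched=true
kubectl delete secret -l patched=true
```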
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":288,"completed":208,"skipped":3459,"failed":0} SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:42:20.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-977 STEP: creating a selector STEP: Creating the service pods in kubernetes May 5 00:42:20.763: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 5 00:42:20.844: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 5 00:42:22.850: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 5 00:42:24.847: INFO: The status of Pod netserver-0 is Running (Ready = false) May 5 00:42:26.847: INFO: The status of Pod netserver-0 is Running (Ready = false) May 5 00:42:28.848: INFO: The status of Pod netserver-0 is Running (Ready = false) May 5 00:42:30.848: INFO: The status of Pod netserver-0 is Running (Ready = false) May 5 00:42:32.848: INFO: The status of Pod netserver-0 is Running (Ready = false) May 5 00:42:34.848: INFO: The status of Pod netserver-0 is Running (Ready = false) May 5 00:42:36.848: INFO: The status of Pod netserver-0 
is Running (Ready = false) May 5 00:42:38.848: INFO: The status of Pod netserver-0 is Running (Ready = false) May 5 00:42:40.848: INFO: The status of Pod netserver-0 is Running (Ready = false) May 5 00:42:42.848: INFO: The status of Pod netserver-0 is Running (Ready = true) May 5 00:42:42.852: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 5 00:42:46.949: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.143:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-977 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 00:42:46.949: INFO: >>> kubeConfig: /root/.kube/config I0505 00:42:46.988960 7 log.go:172] (0xc005f346e0) (0xc00102e1e0) Create stream I0505 00:42:46.988992 7 log.go:172] (0xc005f346e0) (0xc00102e1e0) Stream added, broadcasting: 1 I0505 00:42:46.994630 7 log.go:172] (0xc005f346e0) Reply frame received for 1 I0505 00:42:46.994685 7 log.go:172] (0xc005f346e0) (0xc0014ec140) Create stream I0505 00:42:46.994695 7 log.go:172] (0xc005f346e0) (0xc0014ec140) Stream added, broadcasting: 3 I0505 00:42:46.996143 7 log.go:172] (0xc005f346e0) Reply frame received for 3 I0505 00:42:46.996189 7 log.go:172] (0xc005f346e0) (0xc00102e280) Create stream I0505 00:42:46.996209 7 log.go:172] (0xc005f346e0) (0xc00102e280) Stream added, broadcasting: 5 I0505 00:42:46.997739 7 log.go:172] (0xc005f346e0) Reply frame received for 5 I0505 00:42:47.080746 7 log.go:172] (0xc005f346e0) Data frame received for 5 I0505 00:42:47.080772 7 log.go:172] (0xc00102e280) (5) Data frame handling I0505 00:42:47.080829 7 log.go:172] (0xc005f346e0) Data frame received for 3 I0505 00:42:47.080864 7 log.go:172] (0xc0014ec140) (3) Data frame handling I0505 00:42:47.080900 7 log.go:172] (0xc0014ec140) (3) Data frame sent I0505 00:42:47.080914 7 log.go:172] (0xc005f346e0) Data frame received for 3 I0505 00:42:47.080933 
7 log.go:172] (0xc0014ec140) (3) Data frame handling I0505 00:42:47.082665 7 log.go:172] (0xc005f346e0) Data frame received for 1 I0505 00:42:47.082697 7 log.go:172] (0xc00102e1e0) (1) Data frame handling I0505 00:42:47.082728 7 log.go:172] (0xc00102e1e0) (1) Data frame sent I0505 00:42:47.082746 7 log.go:172] (0xc005f346e0) (0xc00102e1e0) Stream removed, broadcasting: 1 I0505 00:42:47.082768 7 log.go:172] (0xc005f346e0) Go away received I0505 00:42:47.082860 7 log.go:172] (0xc005f346e0) (0xc00102e1e0) Stream removed, broadcasting: 1 I0505 00:42:47.082877 7 log.go:172] (0xc005f346e0) (0xc0014ec140) Stream removed, broadcasting: 3 I0505 00:42:47.082888 7 log.go:172] (0xc005f346e0) (0xc00102e280) Stream removed, broadcasting: 5 May 5 00:42:47.082: INFO: Found all expected endpoints: [netserver-0] May 5 00:42:47.086: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.48:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-977 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 00:42:47.086: INFO: >>> kubeConfig: /root/.kube/config I0505 00:42:47.119587 7 log.go:172] (0xc002a9cb00) (0xc0014ecc80) Create stream I0505 00:42:47.119615 7 log.go:172] (0xc002a9cb00) (0xc0014ecc80) Stream added, broadcasting: 1 I0505 00:42:47.121976 7 log.go:172] (0xc002a9cb00) Reply frame received for 1 I0505 00:42:47.122030 7 log.go:172] (0xc002a9cb00) (0xc002a4bae0) Create stream I0505 00:42:47.122054 7 log.go:172] (0xc002a9cb00) (0xc002a4bae0) Stream added, broadcasting: 3 I0505 00:42:47.123290 7 log.go:172] (0xc002a9cb00) Reply frame received for 3 I0505 00:42:47.123341 7 log.go:172] (0xc002a9cb00) (0xc00102e320) Create stream I0505 00:42:47.123358 7 log.go:172] (0xc002a9cb00) (0xc00102e320) Stream added, broadcasting: 5 I0505 00:42:47.124425 7 log.go:172] (0xc002a9cb00) Reply frame received for 5 I0505 00:42:47.188758 7 log.go:172] (0xc002a9cb00) 
Data frame received for 3 I0505 00:42:47.188795 7 log.go:172] (0xc002a4bae0) (3) Data frame handling I0505 00:42:47.188815 7 log.go:172] (0xc002a4bae0) (3) Data frame sent I0505 00:42:47.188824 7 log.go:172] (0xc002a9cb00) Data frame received for 3 I0505 00:42:47.188836 7 log.go:172] (0xc002a4bae0) (3) Data frame handling I0505 00:42:47.188899 7 log.go:172] (0xc002a9cb00) Data frame received for 5 I0505 00:42:47.188922 7 log.go:172] (0xc00102e320) (5) Data frame handling I0505 00:42:47.191442 7 log.go:172] (0xc002a9cb00) Data frame received for 1 I0505 00:42:47.191455 7 log.go:172] (0xc0014ecc80) (1) Data frame handling I0505 00:42:47.191466 7 log.go:172] (0xc0014ecc80) (1) Data frame sent I0505 00:42:47.191476 7 log.go:172] (0xc002a9cb00) (0xc0014ecc80) Stream removed, broadcasting: 1 I0505 00:42:47.191496 7 log.go:172] (0xc002a9cb00) Go away received I0505 00:42:47.191665 7 log.go:172] (0xc002a9cb00) (0xc0014ecc80) Stream removed, broadcasting: 1 I0505 00:42:47.191688 7 log.go:172] (0xc002a9cb00) (0xc002a4bae0) Stream removed, broadcasting: 3 I0505 00:42:47.191702 7 log.go:172] (0xc002a9cb00) (0xc00102e320) Stream removed, broadcasting: 5 May 5 00:42:47.191: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:42:47.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-977" for this suite. 
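The ExecWithOptions entries above boil down to running curl inside the host-network test pod against each netserver pod's /hostName endpoint. An equivalent manual invocation, using the pod names and an IP taken from this particular run, would be:

```shell
# Fetch a netserver's hostname over HTTP from the host-test pod,
# as the e2e connectivity check does (the IP is specific to this run):
kubectl exec host-test-container-pod -n pod-network-test-977 -c agnhost -- \
  sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 \
    http://10.244.1.143:8080/hostName | grep -v '^\s*$'"
```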
• [SLOW TEST:26.512 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":209,"skipped":3462,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:42:47.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3338 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-3338 May 5 00:42:47.342: INFO: Found 0 stateful 
pods, waiting for 1 May 5 00:42:57.346: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 5 00:42:57.395: INFO: Deleting all statefulset in ns statefulset-3338 May 5 00:42:57.439: INFO: Scaling statefulset ss to 0 May 5 00:43:17.514: INFO: Waiting for statefulset status.replicas updated to 0 May 5 00:43:17.516: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:43:17.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3338" for this suite. • [SLOW TEST:30.329 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":288,"completed":210,"skipped":3510,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] 
Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:43:17.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0505 00:43:30.082626 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 5 00:43:30.082: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:43:30.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9105" for this suite. • [SLOW TEST:12.864 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":288,"completed":211,"skipped":3532,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:43:30.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 5 00:43:30.574: INFO: Waiting up to 5m0s for pod "pod-9674af4d-19a6-4b40-9cf3-79d69be931fd" in namespace "emptydir-1328" to be "Succeeded or Failed" May 5 00:43:30.625: INFO: Pod "pod-9674af4d-19a6-4b40-9cf3-79d69be931fd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 50.41246ms May 5 00:43:32.644: INFO: Pod "pod-9674af4d-19a6-4b40-9cf3-79d69be931fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070096321s May 5 00:43:34.649: INFO: Pod "pod-9674af4d-19a6-4b40-9cf3-79d69be931fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074472398s STEP: Saw pod success May 5 00:43:34.649: INFO: Pod "pod-9674af4d-19a6-4b40-9cf3-79d69be931fd" satisfied condition "Succeeded or Failed" May 5 00:43:34.654: INFO: Trying to get logs from node latest-worker pod pod-9674af4d-19a6-4b40-9cf3-79d69be931fd container test-container: STEP: delete the pod May 5 00:43:34.715: INFO: Waiting for pod pod-9674af4d-19a6-4b40-9cf3-79d69be931fd to disappear May 5 00:43:34.743: INFO: Pod pod-9674af4d-19a6-4b40-9cf3-79d69be931fd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:43:34.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1328" for this suite. 
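The `(root,0644,tmpfs)` emptyDir case above creates a single pod, waits for it to reach `Succeeded`, and checks the file mode from its logs. A hedged sketch of what such a pod spec looks like — the `medium: Memory` tmpfs backing and the 0644 mode are from the test name; the pod name, image, and `mounttest` arguments are assumptions:

```yaml
# Sketch (not the test's actual manifest): tmpfs-backed emptyDir,
# file created with mode 0644 by a root container that then exits.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0644-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
    args:
    - mounttest
    - --new_file_0644=/test-volume/test-file
    - --file_perm=/test-volume/test-file
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs rather than node disk
```

Because `restartPolicy` is `Never` and the container exits after printing the file mode, the pod lands in `Succeeded`, matching the "Succeeded or Failed" condition polled in the log.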
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":212,"skipped":3536,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:43:34.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 5 00:43:34.864: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-57f333e8-b1c1-4105-9a89-9835bd55c2b9" in namespace "security-context-test-1701" to be "Succeeded or Failed" May 5 00:43:34.878: INFO: Pod "busybox-privileged-false-57f333e8-b1c1-4105-9a89-9835bd55c2b9": Phase="Pending", Reason="", readiness=false. Elapsed: 13.401724ms May 5 00:43:37.157: INFO: Pod "busybox-privileged-false-57f333e8-b1c1-4105-9a89-9835bd55c2b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292667577s May 5 00:43:39.161: INFO: Pod "busybox-privileged-false-57f333e8-b1c1-4105-9a89-9835bd55c2b9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.296553431s May 5 00:43:41.166: INFO: Pod "busybox-privileged-false-57f333e8-b1c1-4105-9a89-9835bd55c2b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.301354126s May 5 00:43:41.166: INFO: Pod "busybox-privileged-false-57f333e8-b1c1-4105-9a89-9835bd55c2b9" satisfied condition "Succeeded or Failed" May 5 00:43:41.189: INFO: Got logs for pod "busybox-privileged-false-57f333e8-b1c1-4105-9a89-9835bd55c2b9": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:43:41.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1701" for this suite. • [SLOW TEST:6.446 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with privileged /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":213,"skipped":3544,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:43:41.199: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 5 00:43:41.305: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8948' May 5 00:43:41.670: INFO: stderr: "" May 5 00:43:41.670: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 5 00:43:41.671: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8948' May 5 00:43:41.983: INFO: stderr: "" May 5 00:43:41.983: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 5 00:43:42.987: INFO: Selector matched 1 pods for map[app:agnhost] May 5 00:43:42.987: INFO: Found 0 / 1 May 5 00:43:43.987: INFO: Selector matched 1 pods for map[app:agnhost] May 5 00:43:43.987: INFO: Found 0 / 1 May 5 00:43:44.988: INFO: Selector matched 1 pods for map[app:agnhost] May 5 00:43:44.988: INFO: Found 1 / 1 May 5 00:43:44.988: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 5 00:43:44.991: INFO: Selector matched 1 pods for map[app:agnhost] May 5 00:43:44.991: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 5 00:43:44.991: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe pod agnhost-master-mrbnn --namespace=kubectl-8948' May 5 00:43:45.122: INFO: stderr: "" May 5 00:43:45.122: INFO: stdout: "Name: agnhost-master-mrbnn\nNamespace: kubectl-8948\nPriority: 0\nNode: latest-worker/172.17.0.13\nStart Time: Tue, 05 May 2020 00:43:41 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.152\nIPs:\n IP: 10.244.1.152\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://254b124ff835243082fcec5bbf44505b76d8785e363d45edf90afb71ecfdb90c\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 05 May 2020 00:43:44 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-nvz8k (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-nvz8k:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-nvz8k\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-8948/agnhost-master-mrbnn to latest-worker\n Normal Pulled 3s kubelet, latest-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n Normal Created 2s kubelet, latest-worker Created container agnhost-master\n Normal Started 1s kubelet, latest-worker Started container 
agnhost-master\n" May 5 00:43:45.122: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-8948' May 5 00:43:45.261: INFO: stderr: "" May 5 00:43:45.261: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-8948\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-mrbnn\n" May 5 00:43:45.261: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-8948' May 5 00:43:45.363: INFO: stderr: "" May 5 00:43:45.363: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-8948\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.104.40.69\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.152:6379\nSession Affinity: None\nEvents: \n" May 5 00:43:45.366: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe node latest-control-plane' May 5 00:43:45.493: INFO: stderr: "" May 5 00:43:45.494: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n 
node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 29 Apr 2020 09:53:29 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Tue, 05 May 2020 00:43:40 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 05 May 2020 00:43:28 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 05 May 2020 00:43:28 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 05 May 2020 00:43:28 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 05 May 2020 00:43:28 +0000 Wed, 29 Apr 2020 09:54:06 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3939cf129c9d4d6e85e611ab996d9137\n System UUID: 2573ae1d-4849-412e-9a34-432f95556990\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.18.2\n Kube-Proxy Version: v1.18.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system 
coredns-66bff467f8-8n5vh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 5d14h\n kube-system coredns-66bff467f8-qr7l5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 5d14h\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5d14h\n kube-system kindnet-8x7pf 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 5d14h\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 5d14h\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 5d14h\n kube-system kube-proxy-h8mhz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5d14h\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 5d14h\n local-path-storage local-path-provisioner-bd4bb6b75-bmf2h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5d14h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" May 5 00:43:45.494: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe namespace kubectl-8948' May 5 00:43:45.608: INFO: stderr: "" May 5 00:43:45.608: INFO: stdout: "Name: kubectl-8948\nLabels: e2e-framework=kubectl\n e2e-run=489944c9-0611-4199-9228-6b72f20447c1\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:43:45.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8948" for this suite. 
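The describe checks above ran against a ReplicationController and Service created from stdin (`kubectl create -f - --namespace=kubectl-8948`). A hedged reconstruction of the manifests that could have been piped in — the names, labels, container port 6379, and the `agnhost-server` target port are taken from the describe output in the log; the remaining fields are assumptions:

```yaml
# Hypothetical reconstruction of the stdin manifests; only fields
# echoed back by `kubectl describe` above are grounded in the log.
apiVersion: v1
kind: ReplicationController
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    app: agnhost
    role: master
  template:
    metadata:
      labels:
        app: agnhost
        role: master
    spec:
      containers:
      - name: agnhost-master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        ports:
        - containerPort: 6379
          name: agnhost-server
---
apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
spec:
  selector:
    app: agnhost
    role: master
  ports:
  - port: 6379
    targetPort: agnhost-server   # named port, per the describe output
```

The test then asserts that `kubectl describe` for the pod, rc, service, node, and namespace each print the expected fields (labels, events, endpoints, and so on).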
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":288,"completed":214,"skipped":3556,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:43:45.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-e28a9c98-0414-4842-844a-118a9348dfca STEP: Creating a pod to test consume configMaps May 5 00:43:45.728: INFO: Waiting up to 5m0s for pod "pod-configmaps-79707f91-ef62-43ac-8a50-3029c91c21b3" in namespace "configmap-1087" to be "Succeeded or Failed" May 5 00:43:45.768: INFO: Pod "pod-configmaps-79707f91-ef62-43ac-8a50-3029c91c21b3": Phase="Pending", Reason="", readiness=false. Elapsed: 40.514396ms May 5 00:43:47.772: INFO: Pod "pod-configmaps-79707f91-ef62-43ac-8a50-3029c91c21b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044117259s May 5 00:43:49.776: INFO: Pod "pod-configmaps-79707f91-ef62-43ac-8a50-3029c91c21b3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.048737849s STEP: Saw pod success May 5 00:43:49.777: INFO: Pod "pod-configmaps-79707f91-ef62-43ac-8a50-3029c91c21b3" satisfied condition "Succeeded or Failed" May 5 00:43:49.780: INFO: Trying to get logs from node latest-worker pod pod-configmaps-79707f91-ef62-43ac-8a50-3029c91c21b3 container configmap-volume-test: STEP: delete the pod May 5 00:43:49.818: INFO: Waiting for pod pod-configmaps-79707f91-ef62-43ac-8a50-3029c91c21b3 to disappear May 5 00:43:49.830: INFO: Pod pod-configmaps-79707f91-ef62-43ac-8a50-3029c91c21b3 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:43:49.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1087" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":215,"skipped":3570,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:43:49.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name 
configmap-projected-all-test-volume-afe1d928-3bc7-44c4-83f0-5017160d044e STEP: Creating secret with name secret-projected-all-test-volume-cd43bb35-5993-4672-96d7-0ee709a0e3d0 STEP: Creating a pod to test Check all projections for projected volume plugin May 5 00:43:49.957: INFO: Waiting up to 5m0s for pod "projected-volume-21a77dc9-c0e3-405c-bce2-6600bc7c4fde" in namespace "projected-3943" to be "Succeeded or Failed" May 5 00:43:49.968: INFO: Pod "projected-volume-21a77dc9-c0e3-405c-bce2-6600bc7c4fde": Phase="Pending", Reason="", readiness=false. Elapsed: 11.519799ms May 5 00:43:51.974: INFO: Pod "projected-volume-21a77dc9-c0e3-405c-bce2-6600bc7c4fde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017326817s May 5 00:43:53.979: INFO: Pod "projected-volume-21a77dc9-c0e3-405c-bce2-6600bc7c4fde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022273446s STEP: Saw pod success May 5 00:43:53.979: INFO: Pod "projected-volume-21a77dc9-c0e3-405c-bce2-6600bc7c4fde" satisfied condition "Succeeded or Failed" May 5 00:43:53.983: INFO: Trying to get logs from node latest-worker pod projected-volume-21a77dc9-c0e3-405c-bce2-6600bc7c4fde container projected-all-volume-test: STEP: delete the pod May 5 00:43:54.171: INFO: Waiting for pod projected-volume-21a77dc9-c0e3-405c-bce2-6600bc7c4fde to disappear May 5 00:43:54.312: INFO: Pod projected-volume-21a77dc9-c0e3-405c-bce2-6600bc7c4fde no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:43:54.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3943" for this suite. 
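The projected-volume test above mounts a ConfigMap, a Secret, and downward API data through a single `projected` volume. A hedged sketch of the pod spec — the ConfigMap and Secret names are the ones created in the log; the keys, paths, and `mounttest` invocation are assumptions:

```yaml
# Sketch of a projected volume combining all three source types the
# test exercises; item keys and mount paths are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-test
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
    args: ["mounttest", "--file_content=/all/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /all
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: configmap-projected-all-test-volume-afe1d928-3bc7-44c4-83f0-5017160d044e
      - secret:
          name: secret-projected-all-test-volume-cd43bb35-5993-4672-96d7-0ee709a0e3d0
```

All sources land under one mount point, which is the point of the projection API: one volume, one directory, multiple backing objects.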
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":288,"completed":216,"skipped":3603,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:43:54.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 5 00:43:54.537: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d04c33d7-08b7-45dd-91d7-ed86d173e444" in namespace "projected-9099" to be "Succeeded or Failed" May 5 00:43:54.540: INFO: Pod "downwardapi-volume-d04c33d7-08b7-45dd-91d7-ed86d173e444": Phase="Pending", Reason="", readiness=false. Elapsed: 3.414421ms May 5 00:43:56.588: INFO: Pod "downwardapi-volume-d04c33d7-08b7-45dd-91d7-ed86d173e444": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050777798s May 5 00:43:58.593: INFO: Pod "downwardapi-volume-d04c33d7-08b7-45dd-91d7-ed86d173e444": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.056206058s STEP: Saw pod success May 5 00:43:58.593: INFO: Pod "downwardapi-volume-d04c33d7-08b7-45dd-91d7-ed86d173e444" satisfied condition "Succeeded or Failed" May 5 00:43:58.596: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-d04c33d7-08b7-45dd-91d7-ed86d173e444 container client-container: STEP: delete the pod May 5 00:43:58.714: INFO: Waiting for pod downwardapi-volume-d04c33d7-08b7-45dd-91d7-ed86d173e444 to disappear May 5 00:43:58.718: INFO: Pod downwardapi-volume-d04c33d7-08b7-45dd-91d7-ed86d173e444 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:43:58.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9099" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":217,"skipped":3671,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:43:58.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 5 00:43:58.842: INFO: Waiting up to 5m0s for pod "pod-f934b8ba-8cc4-438f-8ebf-be7d82522981" in namespace 
"emptydir-1269" to be "Succeeded or Failed" May 5 00:43:58.849: INFO: Pod "pod-f934b8ba-8cc4-438f-8ebf-be7d82522981": Phase="Pending", Reason="", readiness=false. Elapsed: 7.711909ms May 5 00:44:00.917: INFO: Pod "pod-f934b8ba-8cc4-438f-8ebf-be7d82522981": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074984584s May 5 00:44:02.922: INFO: Pod "pod-f934b8ba-8cc4-438f-8ebf-be7d82522981": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079921559s STEP: Saw pod success May 5 00:44:02.922: INFO: Pod "pod-f934b8ba-8cc4-438f-8ebf-be7d82522981" satisfied condition "Succeeded or Failed" May 5 00:44:02.924: INFO: Trying to get logs from node latest-worker pod pod-f934b8ba-8cc4-438f-8ebf-be7d82522981 container test-container: STEP: delete the pod May 5 00:44:03.164: INFO: Waiting for pod pod-f934b8ba-8cc4-438f-8ebf-be7d82522981 to disappear May 5 00:44:03.221: INFO: Pod pod-f934b8ba-8cc4-438f-8ebf-be7d82522981 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:44:03.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1269" for this suite. 
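The `(non-root,0777,tmpfs)` case differs from the earlier root/0644 run only in the file mode and in running the container as a non-root user. A hedged sketch of that variant — the non-root UID, names, and arguments are assumptions; only the tmpfs medium, the 0777 mode, and the non-root requirement come from the test name:

```yaml
# Sketch of the non-root variant: same tmpfs emptyDir, file mode
# 0777, container running under an assumed non-root UID.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0777-tmpfs-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001   # assumed non-root UID
  containers:
  - name: test-container
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
    args:
    - mounttest
    - --new_file_0777=/test-volume/test-file
    - --file_perm=/test-volume/test-file
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
```

Running the matrix of (user, mode, medium) combinations is how the suite verifies that emptyDir permissions behave the same regardless of who writes the file and what backs the volume.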
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":218,"skipped":3673,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  5 00:44:03.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May  5 00:44:03.404: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
May  5 00:44:03.418: INFO: Number of nodes with available pods: 0
May  5 00:44:03.418: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
May 5 00:44:03.481: INFO: Number of nodes with available pods: 0 May 5 00:44:03.481: INFO: Node latest-worker2 is running more than one daemon pod May 5 00:44:04.485: INFO: Number of nodes with available pods: 0 May 5 00:44:04.485: INFO: Node latest-worker2 is running more than one daemon pod May 5 00:44:05.486: INFO: Number of nodes with available pods: 0 May 5 00:44:05.486: INFO: Node latest-worker2 is running more than one daemon pod May 5 00:44:06.486: INFO: Number of nodes with available pods: 0 May 5 00:44:06.486: INFO: Node latest-worker2 is running more than one daemon pod May 5 00:44:07.484: INFO: Number of nodes with available pods: 1 May 5 00:44:07.484: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 5 00:44:07.517: INFO: Number of nodes with available pods: 1 May 5 00:44:07.517: INFO: Number of running nodes: 0, number of available pods: 1 May 5 00:44:08.521: INFO: Number of nodes with available pods: 0 May 5 00:44:08.521: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 5 00:44:08.566: INFO: Number of nodes with available pods: 0 May 5 00:44:08.566: INFO: Node latest-worker2 is running more than one daemon pod May 5 00:44:09.571: INFO: Number of nodes with available pods: 0 May 5 00:44:09.571: INFO: Node latest-worker2 is running more than one daemon pod May 5 00:44:10.572: INFO: Number of nodes with available pods: 0 May 5 00:44:10.572: INFO: Node latest-worker2 is running more than one daemon pod May 5 00:44:11.571: INFO: Number of nodes with available pods: 0 May 5 00:44:11.571: INFO: Node latest-worker2 is running more than one daemon pod May 5 00:44:12.571: INFO: Number of nodes with available pods: 0 May 5 00:44:12.571: INFO: Node latest-worker2 is running more than one daemon pod May 5 00:44:13.571: INFO: Number of nodes with available pods: 
0 May 5 00:44:13.571: INFO: Node latest-worker2 is running more than one daemon pod May 5 00:44:14.571: INFO: Number of nodes with available pods: 0 May 5 00:44:14.571: INFO: Node latest-worker2 is running more than one daemon pod May 5 00:44:15.571: INFO: Number of nodes with available pods: 0 May 5 00:44:15.571: INFO: Node latest-worker2 is running more than one daemon pod May 5 00:44:16.593: INFO: Number of nodes with available pods: 0 May 5 00:44:16.593: INFO: Node latest-worker2 is running more than one daemon pod May 5 00:44:17.579: INFO: Number of nodes with available pods: 0 May 5 00:44:17.579: INFO: Node latest-worker2 is running more than one daemon pod May 5 00:44:18.571: INFO: Number of nodes with available pods: 1 May 5 00:44:18.571: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1425, will wait for the garbage collector to delete the pods May 5 00:44:18.637: INFO: Deleting DaemonSet.extensions daemon-set took: 6.447942ms May 5 00:44:18.738: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.24521ms May 5 00:44:25.342: INFO: Number of nodes with available pods: 0 May 5 00:44:25.342: INFO: Number of running nodes: 0, number of available pods: 0 May 5 00:44:25.345: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1425/daemonsets","resourceVersion":"1534693"},"items":null} May 5 00:44:25.348: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1425/pods","resourceVersion":"1534693"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:44:25.363: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1425" for this suite. • [SLOW TEST:22.195 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":288,"completed":219,"skipped":3699,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:44:25.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 5 00:44:30.145: INFO: Successfully updated pod "labelsupdated7e68416-1519-45f0-aed6-15cd512008b3" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:44:34.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2352" for this suite. 
• [SLOW TEST:8.760 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":220,"skipped":3702,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:44:34.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 00:44:35.187: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 00:44:37.198: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236275, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236275, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236275, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236275, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 00:44:39.201: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236275, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236275, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236275, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236275, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 00:44:42.233: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:44:52.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3390" for this suite. STEP: Destroying namespace "webhook-3390-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.314 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":288,"completed":221,"skipped":3737,"failed":0} SS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:44:52.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:44:52.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6671" for this suite. STEP: Destroying namespace "nspatchtest-0e4dfd40-656f-47e4-809a-395b8ff2ac56-6927" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":288,"completed":222,"skipped":3739,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:44:52.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-428fe77b-a877-4a90-8c68-8721b234f90b in namespace container-probe-3192 May 5 00:44:57.147: INFO: Started pod liveness-428fe77b-a877-4a90-8c68-8721b234f90b in namespace container-probe-3192 STEP: checking the pod's current state and verifying that restartCount is present May 5 00:44:57.150: INFO: Initial restart count of pod liveness-428fe77b-a877-4a90-8c68-8721b234f90b is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:48:57.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3192" for this suite. 
• [SLOW TEST:244.699 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":288,"completed":223,"skipped":3768,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:48:57.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components May 5 00:48:57.494: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend May 5 00:48:57.495: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4162' May 5 00:48:58.006: INFO: stderr: "" May 5 00:48:58.006: INFO: stdout: "service/agnhost-slave created\n" 
May 5 00:48:58.006: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend May 5 00:48:58.006: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4162' May 5 00:48:58.458: INFO: stderr: "" May 5 00:48:58.458: INFO: stdout: "service/agnhost-master created\n" May 5 00:48:58.459: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 5 00:48:58.459: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4162' May 5 00:48:58.839: INFO: stderr: "" May 5 00:48:58.839: INFO: stdout: "service/frontend created\n" May 5 00:48:58.839: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 May 5 00:48:58.839: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4162' May 5 00:48:59.121: INFO: stderr: "" May 5 00:48:59.121: INFO: stdout: "deployment.apps/frontend created\n" May 5 00:48:59.121: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend 
template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 5 00:48:59.121: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4162' May 5 00:48:59.458: INFO: stderr: "" May 5 00:48:59.458: INFO: stdout: "deployment.apps/agnhost-master created\n" May 5 00:48:59.458: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 5 00:48:59.458: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4162' May 5 00:48:59.766: INFO: stderr: "" May 5 00:48:59.766: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app May 5 00:48:59.766: INFO: Waiting for all frontend pods to be Running. May 5 00:49:09.816: INFO: Waiting for frontend to serve content. May 5 00:49:09.828: INFO: Trying to add a new entry to the guestbook. May 5 00:49:09.839: INFO: Verifying that added entry can be retrieved. 
STEP: using delete to clean up resources May 5 00:49:09.848: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4162' May 5 00:49:10.010: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 5 00:49:10.010: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 5 00:49:10.011: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4162' May 5 00:49:10.233: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 5 00:49:10.233: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 5 00:49:10.233: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4162' May 5 00:49:10.404: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 5 00:49:10.404: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 5 00:49:10.405: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4162' May 5 00:49:10.523: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 5 00:49:10.523: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 5 00:49:10.523: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4162' May 5 00:49:10.630: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 5 00:49:10.630: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 5 00:49:10.630: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4162' May 5 00:49:11.026: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 5 00:49:11.026: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:49:11.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4162" for this suite. 
• [SLOW TEST:13.703 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":288,"completed":224,"skipped":3796,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:49:11.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 00:49:12.770: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 00:49:14.780: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236552, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236552, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236553, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236552, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 00:49:16.805: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236552, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236552, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236553, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236552, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 00:49:19.818: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:49:34.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6328" for this suite. STEP: Destroying namespace "webhook-6328-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:23.433 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":288,"completed":225,"skipped":3815,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:49:34.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-1314 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-1314 STEP: Waiting until all 
stateful set ss replicas will be running in namespace statefulset-1314 May 5 00:49:34.676: INFO: Found 0 stateful pods, waiting for 1 May 5 00:49:44.681: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 5 00:49:44.686: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1314 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 5 00:49:44.976: INFO: stderr: "I0505 00:49:44.812126 3177 log.go:172] (0xc000bc9340) (0xc0007165a0) Create stream\nI0505 00:49:44.812169 3177 log.go:172] (0xc000bc9340) (0xc0007165a0) Stream added, broadcasting: 1\nI0505 00:49:44.815738 3177 log.go:172] (0xc000bc9340) Reply frame received for 1\nI0505 00:49:44.815781 3177 log.go:172] (0xc000bc9340) (0xc000703360) Create stream\nI0505 00:49:44.815793 3177 log.go:172] (0xc000bc9340) (0xc000703360) Stream added, broadcasting: 3\nI0505 00:49:44.816530 3177 log.go:172] (0xc000bc9340) Reply frame received for 3\nI0505 00:49:44.816555 3177 log.go:172] (0xc000bc9340) (0xc00054a280) Create stream\nI0505 00:49:44.816562 3177 log.go:172] (0xc000bc9340) (0xc00054a280) Stream added, broadcasting: 5\nI0505 00:49:44.817312 3177 log.go:172] (0xc000bc9340) Reply frame received for 5\nI0505 00:49:44.939281 3177 log.go:172] (0xc000bc9340) Data frame received for 5\nI0505 00:49:44.939314 3177 log.go:172] (0xc00054a280) (5) Data frame handling\nI0505 00:49:44.939334 3177 log.go:172] (0xc00054a280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0505 00:49:44.968572 3177 log.go:172] (0xc000bc9340) Data frame received for 3\nI0505 00:49:44.968611 3177 log.go:172] (0xc000703360) (3) Data frame handling\nI0505 00:49:44.968637 3177 log.go:172] (0xc000703360) (3) Data frame sent\nI0505 00:49:44.968993 3177 log.go:172] (0xc000bc9340) Data frame received 
for 5\nI0505 00:49:44.969058 3177 log.go:172] (0xc00054a280) (5) Data frame handling\nI0505 00:49:44.969086 3177 log.go:172] (0xc000bc9340) Data frame received for 3\nI0505 00:49:44.969103 3177 log.go:172] (0xc000703360) (3) Data frame handling\nI0505 00:49:44.970916 3177 log.go:172] (0xc000bc9340) Data frame received for 1\nI0505 00:49:44.970933 3177 log.go:172] (0xc0007165a0) (1) Data frame handling\nI0505 00:49:44.970950 3177 log.go:172] (0xc0007165a0) (1) Data frame sent\nI0505 00:49:44.970963 3177 log.go:172] (0xc000bc9340) (0xc0007165a0) Stream removed, broadcasting: 1\nI0505 00:49:44.971221 3177 log.go:172] (0xc000bc9340) Go away received\nI0505 00:49:44.971545 3177 log.go:172] (0xc000bc9340) (0xc0007165a0) Stream removed, broadcasting: 1\nI0505 00:49:44.971577 3177 log.go:172] (0xc000bc9340) (0xc000703360) Stream removed, broadcasting: 3\nI0505 00:49:44.971600 3177 log.go:172] (0xc000bc9340) (0xc00054a280) Stream removed, broadcasting: 5\n" May 5 00:49:44.977: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 5 00:49:44.977: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 5 00:49:44.980: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 5 00:49:54.984: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 5 00:49:54.984: INFO: Waiting for statefulset status.replicas updated to 0 May 5 00:49:55.017: INFO: POD NODE PHASE GRACE CONDITIONS May 5 00:49:55.017: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:34 +0000 UTC }] May 5 00:49:55.017: INFO: May 5 00:49:55.017: INFO: StatefulSet ss has not reached scale 3, at 1 May 5 00:49:56.023: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.975090315s May 5 00:49:57.192: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.969432198s May 5 00:49:58.491: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.799796797s May 5 00:49:59.495: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.501460621s May 5 00:50:00.502: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.496707538s May 5 00:50:01.507: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.49035974s May 5 00:50:02.513: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.4849261s May 5 00:50:03.518: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.478974162s May 5 00:50:04.526: INFO: Verifying statefulset ss doesn't scale past 3 for another 474.106729ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1314 May 5 00:50:05.531: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1314 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 00:50:05.765: INFO: stderr: "I0505 00:50:05.671615 3199 log.go:172] (0xc00003a4d0) (0xc000b4c000) Create stream\nI0505 00:50:05.671700 3199 log.go:172] (0xc00003a4d0) (0xc000b4c000) Stream added, broadcasting: 1\nI0505 00:50:05.674173 3199 log.go:172] (0xc00003a4d0) Reply frame received for 1\nI0505 00:50:05.674220 3199 log.go:172] (0xc00003a4d0) (0xc000696e60) Create stream\nI0505 00:50:05.674235 3199 log.go:172] (0xc00003a4d0) (0xc000696e60) Stream added, broadcasting: 3\nI0505 00:50:05.675087 3199 log.go:172] (0xc00003a4d0) Reply frame received for 3\nI0505 
00:50:05.675109 3199 log.go:172] (0xc00003a4d0) (0xc000b4c0a0) Create stream\nI0505 00:50:05.675120 3199 log.go:172] (0xc00003a4d0) (0xc000b4c0a0) Stream added, broadcasting: 5\nI0505 00:50:05.676059 3199 log.go:172] (0xc00003a4d0) Reply frame received for 5\nI0505 00:50:05.757979 3199 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0505 00:50:05.758032 3199 log.go:172] (0xc000696e60) (3) Data frame handling\nI0505 00:50:05.758052 3199 log.go:172] (0xc000696e60) (3) Data frame sent\nI0505 00:50:05.758087 3199 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0505 00:50:05.758103 3199 log.go:172] (0xc000b4c0a0) (5) Data frame handling\nI0505 00:50:05.758119 3199 log.go:172] (0xc000b4c0a0) (5) Data frame sent\nI0505 00:50:05.758139 3199 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0505 00:50:05.758155 3199 log.go:172] (0xc000b4c0a0) (5) Data frame handling\nI0505 00:50:05.758173 3199 log.go:172] (0xc00003a4d0) Data frame received for 3\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0505 00:50:05.758202 3199 log.go:172] (0xc000696e60) (3) Data frame handling\nI0505 00:50:05.759847 3199 log.go:172] (0xc00003a4d0) Data frame received for 1\nI0505 00:50:05.759877 3199 log.go:172] (0xc000b4c000) (1) Data frame handling\nI0505 00:50:05.759903 3199 log.go:172] (0xc000b4c000) (1) Data frame sent\nI0505 00:50:05.759990 3199 log.go:172] (0xc00003a4d0) (0xc000b4c000) Stream removed, broadcasting: 1\nI0505 00:50:05.760024 3199 log.go:172] (0xc00003a4d0) Go away received\nI0505 00:50:05.760353 3199 log.go:172] (0xc00003a4d0) (0xc000b4c000) Stream removed, broadcasting: 1\nI0505 00:50:05.760376 3199 log.go:172] (0xc00003a4d0) (0xc000696e60) Stream removed, broadcasting: 3\nI0505 00:50:05.760390 3199 log.go:172] (0xc00003a4d0) (0xc000b4c0a0) Stream removed, broadcasting: 5\n" May 5 00:50:05.765: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 5 00:50:05.765: INFO: stdout of mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 5 00:50:05.765: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1314 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 00:50:06.002: INFO: stderr: "I0505 00:50:05.923365 3220 log.go:172] (0xc000bad810) (0xc000bfe320) Create stream\nI0505 00:50:05.923415 3220 log.go:172] (0xc000bad810) (0xc000bfe320) Stream added, broadcasting: 1\nI0505 00:50:05.929092 3220 log.go:172] (0xc000bad810) Reply frame received for 1\nI0505 00:50:05.929291 3220 log.go:172] (0xc000bad810) (0xc00054cf00) Create stream\nI0505 00:50:05.929307 3220 log.go:172] (0xc000bad810) (0xc00054cf00) Stream added, broadcasting: 3\nI0505 00:50:05.930733 3220 log.go:172] (0xc000bad810) Reply frame received for 3\nI0505 00:50:05.930767 3220 log.go:172] (0xc000bad810) (0xc000687b80) Create stream\nI0505 00:50:05.930778 3220 log.go:172] (0xc000bad810) (0xc000687b80) Stream added, broadcasting: 5\nI0505 00:50:05.931724 3220 log.go:172] (0xc000bad810) Reply frame received for 5\nI0505 00:50:05.994110 3220 log.go:172] (0xc000bad810) Data frame received for 5\nI0505 00:50:05.994166 3220 log.go:172] (0xc000687b80) (5) Data frame handling\nI0505 00:50:05.994193 3220 log.go:172] (0xc000687b80) (5) Data frame sent\nI0505 00:50:05.994211 3220 log.go:172] (0xc000bad810) Data frame received for 5\nI0505 00:50:05.994229 3220 log.go:172] (0xc000687b80) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0505 00:50:05.994253 3220 log.go:172] (0xc000bad810) Data frame received for 3\nI0505 00:50:05.994330 3220 log.go:172] (0xc00054cf00) (3) Data frame handling\nI0505 00:50:05.994389 3220 log.go:172] (0xc00054cf00) (3) Data frame sent\nI0505 00:50:05.994416 3220 log.go:172] (0xc000bad810) Data frame 
received for 3\nI0505 00:50:05.994450 3220 log.go:172] (0xc00054cf00) (3) Data frame handling\nI0505 00:50:05.995964 3220 log.go:172] (0xc000bad810) Data frame received for 1\nI0505 00:50:05.996003 3220 log.go:172] (0xc000bfe320) (1) Data frame handling\nI0505 00:50:05.996033 3220 log.go:172] (0xc000bfe320) (1) Data frame sent\nI0505 00:50:05.996065 3220 log.go:172] (0xc000bad810) (0xc000bfe320) Stream removed, broadcasting: 1\nI0505 00:50:05.996147 3220 log.go:172] (0xc000bad810) Go away received\nI0505 00:50:05.996491 3220 log.go:172] (0xc000bad810) (0xc000bfe320) Stream removed, broadcasting: 1\nI0505 00:50:05.996516 3220 log.go:172] (0xc000bad810) (0xc00054cf00) Stream removed, broadcasting: 3\nI0505 00:50:05.996541 3220 log.go:172] (0xc000bad810) (0xc000687b80) Stream removed, broadcasting: 5\n" May 5 00:50:06.002: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 5 00:50:06.002: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 5 00:50:06.002: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1314 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 00:50:06.218: INFO: stderr: "I0505 00:50:06.142435 3238 log.go:172] (0xc000905970) (0xc000ad43c0) Create stream\nI0505 00:50:06.142503 3238 log.go:172] (0xc000905970) (0xc000ad43c0) Stream added, broadcasting: 1\nI0505 00:50:06.146889 3238 log.go:172] (0xc000905970) Reply frame received for 1\nI0505 00:50:06.146926 3238 log.go:172] (0xc000905970) (0xc0006e45a0) Create stream\nI0505 00:50:06.146934 3238 log.go:172] (0xc000905970) (0xc0006e45a0) Stream added, broadcasting: 3\nI0505 00:50:06.147894 3238 log.go:172] (0xc000905970) Reply frame received for 3\nI0505 00:50:06.147939 3238 log.go:172] (0xc000905970) (0xc000524dc0) Create stream\nI0505 00:50:06.147956 3238 
log.go:172] (0xc000905970) (0xc000524dc0) Stream added, broadcasting: 5\nI0505 00:50:06.148895 3238 log.go:172] (0xc000905970) Reply frame received for 5\nI0505 00:50:06.211572 3238 log.go:172] (0xc000905970) Data frame received for 3\nI0505 00:50:06.211603 3238 log.go:172] (0xc0006e45a0) (3) Data frame handling\nI0505 00:50:06.211630 3238 log.go:172] (0xc000905970) Data frame received for 5\nI0505 00:50:06.211664 3238 log.go:172] (0xc000524dc0) (5) Data frame handling\nI0505 00:50:06.211687 3238 log.go:172] (0xc000524dc0) (5) Data frame sent\nI0505 00:50:06.211710 3238 log.go:172] (0xc000905970) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0505 00:50:06.211768 3238 log.go:172] (0xc000524dc0) (5) Data frame handling\nI0505 00:50:06.211808 3238 log.go:172] (0xc0006e45a0) (3) Data frame sent\nI0505 00:50:06.211831 3238 log.go:172] (0xc000905970) Data frame received for 3\nI0505 00:50:06.211852 3238 log.go:172] (0xc0006e45a0) (3) Data frame handling\nI0505 00:50:06.212892 3238 log.go:172] (0xc000905970) Data frame received for 1\nI0505 00:50:06.212918 3238 log.go:172] (0xc000ad43c0) (1) Data frame handling\nI0505 00:50:06.212942 3238 log.go:172] (0xc000ad43c0) (1) Data frame sent\nI0505 00:50:06.212958 3238 log.go:172] (0xc000905970) (0xc000ad43c0) Stream removed, broadcasting: 1\nI0505 00:50:06.212969 3238 log.go:172] (0xc000905970) Go away received\nI0505 00:50:06.213451 3238 log.go:172] (0xc000905970) (0xc000ad43c0) Stream removed, broadcasting: 1\nI0505 00:50:06.213467 3238 log.go:172] (0xc000905970) (0xc0006e45a0) Stream removed, broadcasting: 3\nI0505 00:50:06.213474 3238 log.go:172] (0xc000905970) (0xc000524dc0) Stream removed, broadcasting: 5\n" May 5 00:50:06.218: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 5 00:50:06.218: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: 
'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 5 00:50:06.231: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 5 00:50:06.231: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 5 00:50:06.231: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 5 00:50:06.234: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1314 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 5 00:50:06.437: INFO: stderr: "I0505 00:50:06.367114 3257 log.go:172] (0xc000a7b340) (0xc000b0a780) Create stream\nI0505 00:50:06.367208 3257 log.go:172] (0xc000a7b340) (0xc000b0a780) Stream added, broadcasting: 1\nI0505 00:50:06.375634 3257 log.go:172] (0xc000a7b340) Reply frame received for 1\nI0505 00:50:06.375667 3257 log.go:172] (0xc000a7b340) (0xc0005941e0) Create stream\nI0505 00:50:06.375675 3257 log.go:172] (0xc000a7b340) (0xc0005941e0) Stream added, broadcasting: 3\nI0505 00:50:06.376440 3257 log.go:172] (0xc000a7b340) Reply frame received for 3\nI0505 00:50:06.376465 3257 log.go:172] (0xc000a7b340) (0xc000595180) Create stream\nI0505 00:50:06.376474 3257 log.go:172] (0xc000a7b340) (0xc000595180) Stream added, broadcasting: 5\nI0505 00:50:06.377292 3257 log.go:172] (0xc000a7b340) Reply frame received for 5\nI0505 00:50:06.429837 3257 log.go:172] (0xc000a7b340) Data frame received for 3\nI0505 00:50:06.429880 3257 log.go:172] (0xc0005941e0) (3) Data frame handling\nI0505 00:50:06.429895 3257 log.go:172] (0xc0005941e0) (3) Data frame sent\nI0505 00:50:06.429907 3257 log.go:172] (0xc000a7b340) Data frame received for 3\nI0505 00:50:06.429917 3257 log.go:172] (0xc0005941e0) (3) Data frame handling\nI0505 00:50:06.429964 3257 log.go:172] (0xc000a7b340) Data frame received for 
5\nI0505 00:50:06.430002 3257 log.go:172] (0xc000595180) (5) Data frame handling\nI0505 00:50:06.430038 3257 log.go:172] (0xc000595180) (5) Data frame sent\nI0505 00:50:06.430057 3257 log.go:172] (0xc000a7b340) Data frame received for 5\nI0505 00:50:06.430066 3257 log.go:172] (0xc000595180) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0505 00:50:06.431369 3257 log.go:172] (0xc000a7b340) Data frame received for 1\nI0505 00:50:06.431392 3257 log.go:172] (0xc000b0a780) (1) Data frame handling\nI0505 00:50:06.431438 3257 log.go:172] (0xc000b0a780) (1) Data frame sent\nI0505 00:50:06.431465 3257 log.go:172] (0xc000a7b340) (0xc000b0a780) Stream removed, broadcasting: 1\nI0505 00:50:06.431623 3257 log.go:172] (0xc000a7b340) Go away received\nI0505 00:50:06.432003 3257 log.go:172] (0xc000a7b340) (0xc000b0a780) Stream removed, broadcasting: 1\nI0505 00:50:06.432031 3257 log.go:172] (0xc000a7b340) (0xc0005941e0) Stream removed, broadcasting: 3\nI0505 00:50:06.432044 3257 log.go:172] (0xc000a7b340) (0xc000595180) Stream removed, broadcasting: 5\n" May 5 00:50:06.437: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 5 00:50:06.437: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 5 00:50:06.437: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1314 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 5 00:50:06.679: INFO: stderr: "I0505 00:50:06.573339 3278 log.go:172] (0xc00096d3f0) (0xc000361c20) Create stream\nI0505 00:50:06.573403 3278 log.go:172] (0xc00096d3f0) (0xc000361c20) Stream added, broadcasting: 1\nI0505 00:50:06.577683 3278 log.go:172] (0xc00096d3f0) Reply frame received for 1\nI0505 00:50:06.577760 3278 log.go:172] (0xc00096d3f0) (0xc0003361e0) Create stream\nI0505 00:50:06.577783 3278 
log.go:172] (0xc00096d3f0) (0xc0003361e0) Stream added, broadcasting: 3\nI0505 00:50:06.579568 3278 log.go:172] (0xc00096d3f0) Reply frame received for 3\nI0505 00:50:06.579600 3278 log.go:172] (0xc00096d3f0) (0xc000336960) Create stream\nI0505 00:50:06.579618 3278 log.go:172] (0xc00096d3f0) (0xc000336960) Stream added, broadcasting: 5\nI0505 00:50:06.580748 3278 log.go:172] (0xc00096d3f0) Reply frame received for 5\nI0505 00:50:06.639228 3278 log.go:172] (0xc00096d3f0) Data frame received for 5\nI0505 00:50:06.639258 3278 log.go:172] (0xc000336960) (5) Data frame handling\nI0505 00:50:06.639281 3278 log.go:172] (0xc000336960) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0505 00:50:06.670656 3278 log.go:172] (0xc00096d3f0) Data frame received for 5\nI0505 00:50:06.670710 3278 log.go:172] (0xc000336960) (5) Data frame handling\nI0505 00:50:06.670738 3278 log.go:172] (0xc00096d3f0) Data frame received for 3\nI0505 00:50:06.670756 3278 log.go:172] (0xc0003361e0) (3) Data frame handling\nI0505 00:50:06.670778 3278 log.go:172] (0xc0003361e0) (3) Data frame sent\nI0505 00:50:06.670797 3278 log.go:172] (0xc00096d3f0) Data frame received for 3\nI0505 00:50:06.670808 3278 log.go:172] (0xc0003361e0) (3) Data frame handling\nI0505 00:50:06.672633 3278 log.go:172] (0xc00096d3f0) Data frame received for 1\nI0505 00:50:06.672661 3278 log.go:172] (0xc000361c20) (1) Data frame handling\nI0505 00:50:06.672689 3278 log.go:172] (0xc000361c20) (1) Data frame sent\nI0505 00:50:06.672714 3278 log.go:172] (0xc00096d3f0) (0xc000361c20) Stream removed, broadcasting: 1\nI0505 00:50:06.672731 3278 log.go:172] (0xc00096d3f0) Go away received\nI0505 00:50:06.673363 3278 log.go:172] (0xc00096d3f0) (0xc000361c20) Stream removed, broadcasting: 1\nI0505 00:50:06.673390 3278 log.go:172] (0xc00096d3f0) (0xc0003361e0) Stream removed, broadcasting: 3\nI0505 00:50:06.673403 3278 log.go:172] (0xc00096d3f0) (0xc000336960) Stream removed, broadcasting: 5\n" May 5 00:50:06.679: 
INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 5 00:50:06.679: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 5 00:50:06.679: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1314 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 5 00:50:07.015: INFO: stderr: "I0505 00:50:06.888011 3298 log.go:172] (0xc000ac9550) (0xc000c18460) Create stream\nI0505 00:50:06.888094 3298 log.go:172] (0xc000ac9550) (0xc000c18460) Stream added, broadcasting: 1\nI0505 00:50:06.893415 3298 log.go:172] (0xc000ac9550) Reply frame received for 1\nI0505 00:50:06.893490 3298 log.go:172] (0xc000ac9550) (0xc00049cf00) Create stream\nI0505 00:50:06.893519 3298 log.go:172] (0xc000ac9550) (0xc00049cf00) Stream added, broadcasting: 3\nI0505 00:50:06.894887 3298 log.go:172] (0xc000ac9550) Reply frame received for 3\nI0505 00:50:06.894906 3298 log.go:172] (0xc000ac9550) (0xc00053c1e0) Create stream\nI0505 00:50:06.894912 3298 log.go:172] (0xc000ac9550) (0xc00053c1e0) Stream added, broadcasting: 5\nI0505 00:50:06.895887 3298 log.go:172] (0xc000ac9550) Reply frame received for 5\nI0505 00:50:06.960840 3298 log.go:172] (0xc000ac9550) Data frame received for 5\nI0505 00:50:06.960878 3298 log.go:172] (0xc00053c1e0) (5) Data frame handling\nI0505 00:50:06.960902 3298 log.go:172] (0xc00053c1e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0505 00:50:07.006842 3298 log.go:172] (0xc000ac9550) Data frame received for 3\nI0505 00:50:07.006985 3298 log.go:172] (0xc00049cf00) (3) Data frame handling\nI0505 00:50:07.007046 3298 log.go:172] (0xc00049cf00) (3) Data frame sent\nI0505 00:50:07.007162 3298 log.go:172] (0xc000ac9550) Data frame received for 5\nI0505 00:50:07.007202 3298 log.go:172] (0xc00053c1e0) (5) Data frame 
handling\nI0505 00:50:07.007305 3298 log.go:172] (0xc000ac9550) Data frame received for 3\nI0505 00:50:07.007334 3298 log.go:172] (0xc00049cf00) (3) Data frame handling\nI0505 00:50:07.009373 3298 log.go:172] (0xc000ac9550) Data frame received for 1\nI0505 00:50:07.009425 3298 log.go:172] (0xc000c18460) (1) Data frame handling\nI0505 00:50:07.009459 3298 log.go:172] (0xc000c18460) (1) Data frame sent\nI0505 00:50:07.009484 3298 log.go:172] (0xc000ac9550) (0xc000c18460) Stream removed, broadcasting: 1\nI0505 00:50:07.009837 3298 log.go:172] (0xc000ac9550) Go away received\nI0505 00:50:07.010105 3298 log.go:172] (0xc000ac9550) (0xc000c18460) Stream removed, broadcasting: 1\nI0505 00:50:07.010129 3298 log.go:172] (0xc000ac9550) (0xc00049cf00) Stream removed, broadcasting: 3\nI0505 00:50:07.010140 3298 log.go:172] (0xc000ac9550) (0xc00053c1e0) Stream removed, broadcasting: 5\n" May 5 00:50:07.016: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 5 00:50:07.016: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 5 00:50:07.016: INFO: Waiting for statefulset status.replicas updated to 0 May 5 00:50:07.040: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 May 5 00:50:17.048: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 5 00:50:17.048: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 5 00:50:17.048: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 5 00:50:17.078: INFO: POD NODE PHASE GRACE CONDITIONS May 5 00:50:17.078: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:34 +0000 UTC }] May 5 00:50:17.078: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:55 +0000 UTC }] May 5 00:50:17.078: INFO: ss-2 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:55 +0000 UTC }] May 5 00:50:17.078: INFO: May 5 00:50:17.078: INFO: StatefulSet ss has not reached scale 0, at 3 May 5 00:50:18.084: INFO: POD NODE PHASE GRACE CONDITIONS May 5 00:50:18.084: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:34 +0000 UTC }] May 5 00:50:18.084: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2020-05-05 00:49:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:55 +0000 UTC }] May 5 00:50:18.084: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:55 +0000 UTC }] May 5 00:50:18.084: INFO: May 5 00:50:18.084: INFO: StatefulSet ss has not reached scale 0, at 3 May 5 00:50:19.090: INFO: POD NODE PHASE GRACE CONDITIONS May 5 00:50:19.090: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:34 +0000 UTC }] May 5 00:50:19.090: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled 
True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:55 +0000 UTC }] May 5 00:50:19.090: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:55 +0000 UTC }] May 5 00:50:19.090: INFO: May 5 00:50:19.090: INFO: StatefulSet ss has not reached scale 0, at 3 May 5 00:50:20.095: INFO: POD NODE PHASE GRACE CONDITIONS May 5 00:50:20.095: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:34 +0000 UTC }] May 5 00:50:20.096: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:55 +0000 UTC }] May 5 00:50:20.096: INFO: May 5 00:50:20.096: INFO: StatefulSet ss has not reached scale 0, at 2 May 5 00:50:21.101: INFO: POD NODE PHASE GRACE CONDITIONS May 5 00:50:21.101: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2020-05-05 00:49:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:34 +0000 UTC }] May 5 00:50:21.101: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:55 +0000 UTC }] May 5 00:50:21.101: INFO: May 5 00:50:21.101: INFO: StatefulSet ss has not reached scale 0, at 2 May 5 00:50:22.106: INFO: POD NODE PHASE GRACE CONDITIONS May 5 00:50:22.106: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:34 +0000 UTC }] May 5 00:50:22.106: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled 
True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:55 +0000 UTC }] May 5 00:50:22.106: INFO: May 5 00:50:22.106: INFO: StatefulSet ss has not reached scale 0, at 2 May 5 00:50:23.111: INFO: POD NODE PHASE GRACE CONDITIONS May 5 00:50:23.112: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:34 +0000 UTC }] May 5 00:50:23.112: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:55 +0000 UTC }] May 5 00:50:23.112: INFO: May 5 00:50:23.112: INFO: StatefulSet ss has not reached scale 0, at 2 May 5 00:50:24.118: INFO: POD NODE PHASE GRACE CONDITIONS May 5 00:50:24.118: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:34 +0000 UTC }] May 5 00:50:24.118: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2020-05-05 00:49:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:55 +0000 UTC }] May 5 00:50:24.118: INFO: May 5 00:50:24.118: INFO: StatefulSet ss has not reached scale 0, at 2 May 5 00:50:25.122: INFO: POD NODE PHASE GRACE CONDITIONS May 5 00:50:25.122: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:50:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 00:49:34 +0000 UTC }] May 5 00:50:25.122: INFO: May 5 00:50:25.122: INFO: StatefulSet ss has not reached scale 0, at 1 May 5 00:50:26.128: INFO: Verifying statefulset ss doesn't scale past 0 for another 933.797274ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-1314 May 5 00:50:27.131: INFO: Scaling statefulset ss to 0 May 5 00:50:27.141: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 5 00:50:27.143: INFO: Deleting all statefulset in ns statefulset-1314 May 5 00:50:27.145: INFO: Scaling statefulset ss to 0 May 5 00:50:27.154: INFO: Waiting for statefulset status.replicas updated to 0 May 5 00:50:27.156: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:50:27.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1314" for this suite. • [SLOW TEST:52.647 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":288,"completed":226,"skipped":3817,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:50:27.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] 
[sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:50:32.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2337" for this suite. • [SLOW TEST:6.603 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":288,"completed":227,"skipped":3819,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:50:33.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-32a90053-45aa-4b91-ba92-85be1136bdd7 in namespace container-probe-3633 May 5 00:50:38.384: INFO: Started pod 
liveness-32a90053-45aa-4b91-ba92-85be1136bdd7 in namespace container-probe-3633 STEP: checking the pod's current state and verifying that restartCount is present May 5 00:50:38.388: INFO: Initial restart count of pod liveness-32a90053-45aa-4b91-ba92-85be1136bdd7 is 0 May 5 00:51:00.438: INFO: Restart count of pod container-probe-3633/liveness-32a90053-45aa-4b91-ba92-85be1136bdd7 is now 1 (22.050582914s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:51:00.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3633" for this suite. • [SLOW TEST:26.720 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":228,"skipped":3896,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:51:00.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition 
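The probe test above verifies restarts by recording the pod's initial restartCount and polling until it increases (0 to 1 after ~22s here, once the /healthz probe starts failing). A minimal sketch of that wait loop, in Python rather than the e2e framework's Go; `get_restart_count` is a hypothetical stand-in for reading `status.containerStatuses[0].restartCount` from the API server:

```python
import time

# Hedged sketch (not the e2e framework's actual code) of the check the probe
# test performs: record the pod's initial restartCount, then poll until the
# liveness probe failure has forced at least one container restart.
def wait_for_restart(get_restart_count, initial, timeout_s=120, poll_s=2.0):
    """Return the first observed restart count greater than `initial`."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        count = get_restart_count()
        if count > initial:
            return count
        time.sleep(poll_s)
    raise TimeoutError("pod was never restarted by the liveness probe")

# Stand-in for successive API reads of the pod's restart count:
counts = iter([0, 0, 1])
print(wait_for_restart(lambda: next(counts), initial=0, poll_s=0.0))  # 1
```

The real test then deletes the pod and tears down the namespace, as the [AfterEach] entries that follow show.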
STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 5 00:51:00.595: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:51:01.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6018" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":288,"completed":229,"skipped":3919,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:51:01.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 5 00:51:01.473: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 5 00:51:01.493: INFO: Waiting for terminating namespaces to be deleted... 
May 5 00:51:01.496: INFO: Logging pods the apiserver thinks are on node latest-worker before test May 5 00:51:01.500: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 5 00:51:01.501: INFO: Container kindnet-cni ready: true, restart count 0 May 5 00:51:01.501: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 5 00:51:01.501: INFO: Container kube-proxy ready: true, restart count 0 May 5 00:51:01.501: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test May 5 00:51:01.505: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 5 00:51:01.505: INFO: Container kindnet-cni ready: true, restart count 0 May 5 00:51:01.505: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 5 00:51:01.505: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 May 5 00:51:01.606: INFO: Pod kindnet-hg2tf requesting resource cpu=100m on Node latest-worker May 5 00:51:01.606: INFO: Pod kindnet-jl4dn requesting resource cpu=100m on Node latest-worker2 May 5 00:51:01.606: INFO: Pod kube-proxy-c8n27 requesting resource cpu=0m on Node latest-worker May 5 00:51:01.606: INFO: Pod kube-proxy-pcmmp requesting resource cpu=0m on Node latest-worker2 STEP: Starting Pods to consume most of the cluster CPU. 
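At this step the test creates one "filler" pod per node, sized to consume the node's remaining schedulable CPU so that one more pod cannot fit. A minimal sketch of that sizing arithmetic, assuming a per-node allocatable of 11230m (the allocatable value itself never appears in the log; only the 100m/0m requests above and the resulting filler size do):

```python
# Hedged sketch of the filler-pod sizing: the filler requests whatever CPU
# remains after the pods already running on the node are accounted for.
# The 11230m allocatable is an assumption, not shown in the log.
def filler_cpu_millis(allocatable_m, existing_requests_m):
    """CPU (in millicores) a filler pod must request to use up the node."""
    return allocatable_m - sum(existing_requests_m)

# Per the log, each worker already runs kindnet (100m) and kube-proxy (0m):
print(filler_cpu_millis(11230, [100, 0]))  # 11130 -> "consumes cpu=11130m"
```

With both nodes filled this way, the subsequent pod that requests additional CPU cannot schedule, producing exactly the FailedScheduling event ("2 Insufficient cpu") the test asserts on.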
May 5 00:51:01.606: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker May 5 00:51:01.613: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-1b77d766-5c2e-463d-b9ac-0d70180b6cbb.160bfb8983e834d1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7428/filler-pod-1b77d766-5c2e-463d-b9ac-0d70180b6cbb to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-1b77d766-5c2e-463d-b9ac-0d70180b6cbb.160bfb89ce56a508], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-1b77d766-5c2e-463d-b9ac-0d70180b6cbb.160bfb8ab3026794], Reason = [Created], Message = [Created container filler-pod-1b77d766-5c2e-463d-b9ac-0d70180b6cbb] STEP: Considering event: Type = [Normal], Name = [filler-pod-1b77d766-5c2e-463d-b9ac-0d70180b6cbb.160bfb8aceed1902], Reason = [Started], Message = [Started container filler-pod-1b77d766-5c2e-463d-b9ac-0d70180b6cbb] STEP: Considering event: Type = [Normal], Name = [filler-pod-a79a262c-2e2e-4eee-9d9f-5515da49602e.160bfb89851d866e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7428/filler-pod-a79a262c-2e2e-4eee-9d9f-5515da49602e to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-a79a262c-2e2e-4eee-9d9f-5515da49602e.160bfb8a8eb273c3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-a79a262c-2e2e-4eee-9d9f-5515da49602e.160bfb8aceed18b0], Reason = [Created], Message = [Created container filler-pod-a79a262c-2e2e-4eee-9d9f-5515da49602e] STEP: Considering event: Type = [Normal], Name = [filler-pod-a79a262c-2e2e-4eee-9d9f-5515da49602e.160bfb8adfbfe291], Reason = [Started], Message = [Started container 
filler-pod-a79a262c-2e2e-4eee-9d9f-5515da49602e] STEP: Considering event: Type = [Warning], Name = [additional-pod.160bfb8b63f7cfe5], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:51:10.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7428" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:9.353 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":288,"completed":230,"skipped":3965,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes 
client May 5 00:51:10.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-2785 STEP: creating a selector STEP: Creating the service pods in kubernetes May 5 00:51:10.817: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 5 00:51:10.920: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 5 00:51:13.154: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 5 00:51:14.924: INFO: The status of Pod netserver-0 is Running (Ready = false) May 5 00:51:17.113: INFO: The status of Pod netserver-0 is Running (Ready = false) May 5 00:51:18.925: INFO: The status of Pod netserver-0 is Running (Ready = false) May 5 00:51:20.925: INFO: The status of Pod netserver-0 is Running (Ready = false) May 5 00:51:22.925: INFO: The status of Pod netserver-0 is Running (Ready = false) May 5 00:51:24.923: INFO: The status of Pod netserver-0 is Running (Ready = false) May 5 00:51:26.925: INFO: The status of Pod netserver-0 is Running (Ready = false) May 5 00:51:28.925: INFO: The status of Pod netserver-0 is Running (Ready = true) May 5 00:51:28.931: INFO: The status of Pod netserver-1 is Running (Ready = false) May 5 00:51:30.936: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 5 00:51:34.961: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.70:8080/dial?request=hostname&protocol=http&host=10.244.1.162&port=8080&tries=1'] Namespace:pod-network-test-2785 PodName:test-container-pod ContainerName:webserver Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 00:51:34.961: INFO: >>> kubeConfig: /root/.kube/config I0505 00:51:34.991158 7 log.go:172] (0xc002a9d1e0) (0xc0012b8820) Create stream I0505 00:51:34.991231 7 log.go:172] (0xc002a9d1e0) (0xc0012b8820) Stream added, broadcasting: 1 I0505 00:51:34.993632 7 log.go:172] (0xc002a9d1e0) Reply frame received for 1 I0505 00:51:34.993666 7 log.go:172] (0xc002a9d1e0) (0xc0028d9360) Create stream I0505 00:51:34.993678 7 log.go:172] (0xc002a9d1e0) (0xc0028d9360) Stream added, broadcasting: 3 I0505 00:51:34.994451 7 log.go:172] (0xc002a9d1e0) Reply frame received for 3 I0505 00:51:34.994480 7 log.go:172] (0xc002a9d1e0) (0xc0014a7a40) Create stream I0505 00:51:34.994494 7 log.go:172] (0xc002a9d1e0) (0xc0014a7a40) Stream added, broadcasting: 5 I0505 00:51:34.995311 7 log.go:172] (0xc002a9d1e0) Reply frame received for 5 I0505 00:51:35.060919 7 log.go:172] (0xc002a9d1e0) Data frame received for 3 I0505 00:51:35.060960 7 log.go:172] (0xc0028d9360) (3) Data frame handling I0505 00:51:35.060989 7 log.go:172] (0xc0028d9360) (3) Data frame sent I0505 00:51:35.061705 7 log.go:172] (0xc002a9d1e0) Data frame received for 3 I0505 00:51:35.061774 7 log.go:172] (0xc0028d9360) (3) Data frame handling I0505 00:51:35.061807 7 log.go:172] (0xc002a9d1e0) Data frame received for 5 I0505 00:51:35.061827 7 log.go:172] (0xc0014a7a40) (5) Data frame handling I0505 00:51:35.063708 7 log.go:172] (0xc002a9d1e0) Data frame received for 1 I0505 00:51:35.063736 7 log.go:172] (0xc0012b8820) (1) Data frame handling I0505 00:51:35.063755 7 log.go:172] (0xc0012b8820) (1) Data frame sent I0505 00:51:35.063891 7 log.go:172] (0xc002a9d1e0) (0xc0012b8820) Stream removed, broadcasting: 1 I0505 00:51:35.063961 7 log.go:172] (0xc002a9d1e0) Go away received I0505 00:51:35.064020 7 log.go:172] (0xc002a9d1e0) (0xc0012b8820) Stream removed, broadcasting: 1 I0505 00:51:35.064064 7 log.go:172] (0xc002a9d1e0) (0xc0028d9360) Stream removed, broadcasting: 3 
I0505 00:51:35.064092 7 log.go:172] (0xc002a9d1e0) (0xc0014a7a40) Stream removed, broadcasting: 5 May 5 00:51:35.064: INFO: Waiting for responses: map[] May 5 00:51:35.067: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.70:8080/dial?request=hostname&protocol=http&host=10.244.2.69&port=8080&tries=1'] Namespace:pod-network-test-2785 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 00:51:35.067: INFO: >>> kubeConfig: /root/.kube/config I0505 00:51:35.103992 7 log.go:172] (0xc002e16f20) (0xc0010b2820) Create stream I0505 00:51:35.104019 7 log.go:172] (0xc002e16f20) (0xc0010b2820) Stream added, broadcasting: 1 I0505 00:51:35.106225 7 log.go:172] (0xc002e16f20) Reply frame received for 1 I0505 00:51:35.106265 7 log.go:172] (0xc002e16f20) (0xc0028d94a0) Create stream I0505 00:51:35.106282 7 log.go:172] (0xc002e16f20) (0xc0028d94a0) Stream added, broadcasting: 3 I0505 00:51:35.107311 7 log.go:172] (0xc002e16f20) Reply frame received for 3 I0505 00:51:35.107369 7 log.go:172] (0xc002e16f20) (0xc0011ea140) Create stream I0505 00:51:35.107395 7 log.go:172] (0xc002e16f20) (0xc0011ea140) Stream added, broadcasting: 5 I0505 00:51:35.108495 7 log.go:172] (0xc002e16f20) Reply frame received for 5 I0505 00:51:35.165056 7 log.go:172] (0xc002e16f20) Data frame received for 3 I0505 00:51:35.165085 7 log.go:172] (0xc0028d94a0) (3) Data frame handling I0505 00:51:35.165287 7 log.go:172] (0xc0028d94a0) (3) Data frame sent I0505 00:51:35.165880 7 log.go:172] (0xc002e16f20) Data frame received for 3 I0505 00:51:35.165913 7 log.go:172] (0xc0028d94a0) (3) Data frame handling I0505 00:51:35.165937 7 log.go:172] (0xc002e16f20) Data frame received for 5 I0505 00:51:35.166006 7 log.go:172] (0xc0011ea140) (5) Data frame handling I0505 00:51:35.167866 7 log.go:172] (0xc002e16f20) Data frame received for 1 I0505 00:51:35.167900 7 log.go:172] (0xc0010b2820) (1) Data frame handling I0505 
00:51:35.167923 7 log.go:172] (0xc0010b2820) (1) Data frame sent I0505 00:51:35.167961 7 log.go:172] (0xc002e16f20) (0xc0010b2820) Stream removed, broadcasting: 1 I0505 00:51:35.167987 7 log.go:172] (0xc002e16f20) Go away received I0505 00:51:35.168062 7 log.go:172] (0xc002e16f20) (0xc0010b2820) Stream removed, broadcasting: 1 I0505 00:51:35.168077 7 log.go:172] (0xc002e16f20) (0xc0028d94a0) Stream removed, broadcasting: 3 I0505 00:51:35.168084 7 log.go:172] (0xc002e16f20) (0xc0011ea140) Stream removed, broadcasting: 5 May 5 00:51:35.168: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:51:35.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2785" for this suite. • [SLOW TEST:24.403 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":288,"completed":231,"skipped":3997,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a 
kubernetes client May 5 00:51:35.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3762 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 5 00:51:35.277: INFO: Found 0 stateful pods, waiting for 3 May 5 00:51:45.283: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 5 00:51:45.283: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 5 00:51:45.283: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 5 00:51:55.282: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 5 00:51:55.282: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 5 00:51:55.282: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 5 00:51:55.291: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3762 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 5 00:51:55.555: INFO: stderr: "I0505 00:51:55.437666 3318 log.go:172] (0xc000b28000) (0xc0004221e0) Create stream\nI0505 00:51:55.437757 3318 log.go:172] (0xc000b28000) (0xc0004221e0) Stream added, broadcasting: 1\nI0505 00:51:55.439596 
3318 log.go:172] (0xc000b28000) Reply frame received for 1\nI0505 00:51:55.439634 3318 log.go:172] (0xc000b28000) (0xc00014fd60) Create stream\nI0505 00:51:55.439647 3318 log.go:172] (0xc000b28000) (0xc00014fd60) Stream added, broadcasting: 3\nI0505 00:51:55.440615 3318 log.go:172] (0xc000b28000) Reply frame received for 3\nI0505 00:51:55.440659 3318 log.go:172] (0xc000b28000) (0xc000846d20) Create stream\nI0505 00:51:55.440675 3318 log.go:172] (0xc000b28000) (0xc000846d20) Stream added, broadcasting: 5\nI0505 00:51:55.442023 3318 log.go:172] (0xc000b28000) Reply frame received for 5\nI0505 00:51:55.510829 3318 log.go:172] (0xc000b28000) Data frame received for 5\nI0505 00:51:55.510856 3318 log.go:172] (0xc000846d20) (5) Data frame handling\nI0505 00:51:55.510874 3318 log.go:172] (0xc000846d20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0505 00:51:55.544554 3318 log.go:172] (0xc000b28000) Data frame received for 3\nI0505 00:51:55.544580 3318 log.go:172] (0xc00014fd60) (3) Data frame handling\nI0505 00:51:55.544607 3318 log.go:172] (0xc00014fd60) (3) Data frame sent\nI0505 00:51:55.544787 3318 log.go:172] (0xc000b28000) Data frame received for 3\nI0505 00:51:55.544823 3318 log.go:172] (0xc00014fd60) (3) Data frame handling\nI0505 00:51:55.544852 3318 log.go:172] (0xc000b28000) Data frame received for 5\nI0505 00:51:55.544865 3318 log.go:172] (0xc000846d20) (5) Data frame handling\nI0505 00:51:55.547959 3318 log.go:172] (0xc000b28000) Data frame received for 1\nI0505 00:51:55.547995 3318 log.go:172] (0xc0004221e0) (1) Data frame handling\nI0505 00:51:55.548011 3318 log.go:172] (0xc0004221e0) (1) Data frame sent\nI0505 00:51:55.548028 3318 log.go:172] (0xc000b28000) (0xc0004221e0) Stream removed, broadcasting: 1\nI0505 00:51:55.548468 3318 log.go:172] (0xc000b28000) (0xc0004221e0) Stream removed, broadcasting: 1\nI0505 00:51:55.548492 3318 log.go:172] (0xc000b28000) (0xc00014fd60) Stream removed, broadcasting: 3\nI0505 00:51:55.548735 
3318 log.go:172] (0xc000b28000) (0xc000846d20) Stream removed, broadcasting: 5\n" May 5 00:51:55.555: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 5 00:51:55.555: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 5 00:52:05.597: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 5 00:52:15.642: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3762 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 00:52:18.905: INFO: stderr: "I0505 00:52:18.802616 3340 log.go:172] (0xc000982840) (0xc00066af00) Create stream\nI0505 00:52:18.802669 3340 log.go:172] (0xc000982840) (0xc00066af00) Stream added, broadcasting: 1\nI0505 00:52:18.805458 3340 log.go:172] (0xc000982840) Reply frame received for 1\nI0505 00:52:18.805509 3340 log.go:172] (0xc000982840) (0xc000654c80) Create stream\nI0505 00:52:18.805527 3340 log.go:172] (0xc000982840) (0xc000654c80) Stream added, broadcasting: 3\nI0505 00:52:18.806490 3340 log.go:172] (0xc000982840) Reply frame received for 3\nI0505 00:52:18.806538 3340 log.go:172] (0xc000982840) (0xc000646500) Create stream\nI0505 00:52:18.806561 3340 log.go:172] (0xc000982840) (0xc000646500) Stream added, broadcasting: 5\nI0505 00:52:18.807729 3340 log.go:172] (0xc000982840) Reply frame received for 5\nI0505 00:52:18.897560 3340 log.go:172] (0xc000982840) Data frame received for 3\nI0505 00:52:18.897688 3340 log.go:172] (0xc000654c80) (3) Data frame handling\nI0505 00:52:18.897704 3340 log.go:172] (0xc000654c80) (3) Data frame sent\nI0505 00:52:18.897711 3340 log.go:172] (0xc000982840) Data frame received for 3\nI0505 
00:52:18.897716 3340 log.go:172] (0xc000654c80) (3) Data frame handling\nI0505 00:52:18.897749 3340 log.go:172] (0xc000982840) Data frame received for 5\nI0505 00:52:18.897763 3340 log.go:172] (0xc000646500) (5) Data frame handling\nI0505 00:52:18.897785 3340 log.go:172] (0xc000646500) (5) Data frame sent\nI0505 00:52:18.897798 3340 log.go:172] (0xc000982840) Data frame received for 5\nI0505 00:52:18.897805 3340 log.go:172] (0xc000646500) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0505 00:52:18.899785 3340 log.go:172] (0xc000982840) Data frame received for 1\nI0505 00:52:18.899820 3340 log.go:172] (0xc00066af00) (1) Data frame handling\nI0505 00:52:18.899854 3340 log.go:172] (0xc00066af00) (1) Data frame sent\nI0505 00:52:18.899884 3340 log.go:172] (0xc000982840) (0xc00066af00) Stream removed, broadcasting: 1\nI0505 00:52:18.900121 3340 log.go:172] (0xc000982840) Go away received\nI0505 00:52:18.900387 3340 log.go:172] (0xc000982840) (0xc00066af00) Stream removed, broadcasting: 1\nI0505 00:52:18.900413 3340 log.go:172] (0xc000982840) (0xc000654c80) Stream removed, broadcasting: 3\nI0505 00:52:18.900429 3340 log.go:172] (0xc000982840) (0xc000646500) Stream removed, broadcasting: 5\n" May 5 00:52:18.905: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 5 00:52:18.905: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 5 00:52:29.023: INFO: Waiting for StatefulSet statefulset-3762/ss2 to complete update May 5 00:52:29.023: INFO: Waiting for Pod statefulset-3762/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 5 00:52:29.023: INFO: Waiting for Pod statefulset-3762/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 5 00:52:39.055: INFO: Waiting for StatefulSet statefulset-3762/ss2 to complete update May 5 00:52:39.055: INFO: Waiting for Pod statefulset-3762/ss2-0 
to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision May 5 00:52:49.032: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3762 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 5 00:52:49.322: INFO: stderr: "I0505 00:52:49.185606 3371 log.go:172] (0xc00003b290) (0xc0007ef540) Create stream\nI0505 00:52:49.185672 3371 log.go:172] (0xc00003b290) (0xc0007ef540) Stream added, broadcasting: 1\nI0505 00:52:49.190415 3371 log.go:172] (0xc00003b290) Reply frame received for 1\nI0505 00:52:49.190463 3371 log.go:172] (0xc00003b290) (0xc0007065a0) Create stream\nI0505 00:52:49.190481 3371 log.go:172] (0xc00003b290) (0xc0007065a0) Stream added, broadcasting: 3\nI0505 00:52:49.191417 3371 log.go:172] (0xc00003b290) Reply frame received for 3\nI0505 00:52:49.191448 3371 log.go:172] (0xc00003b290) (0xc00053edc0) Create stream\nI0505 00:52:49.191458 3371 log.go:172] (0xc00003b290) (0xc00053edc0) Stream added, broadcasting: 5\nI0505 00:52:49.192209 3371 log.go:172] (0xc00003b290) Reply frame received for 5\nI0505 00:52:49.278892 3371 log.go:172] (0xc00003b290) Data frame received for 5\nI0505 00:52:49.278944 3371 log.go:172] (0xc00053edc0) (5) Data frame handling\nI0505 00:52:49.278989 3371 log.go:172] (0xc00053edc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0505 00:52:49.314556 3371 log.go:172] (0xc00003b290) Data frame received for 3\nI0505 00:52:49.314617 3371 log.go:172] (0xc0007065a0) (3) Data frame handling\nI0505 00:52:49.314637 3371 log.go:172] (0xc0007065a0) (3) Data frame sent\nI0505 00:52:49.314654 3371 log.go:172] (0xc00003b290) Data frame received for 3\nI0505 00:52:49.314679 3371 log.go:172] (0xc0007065a0) (3) Data frame handling\nI0505 00:52:49.314749 3371 log.go:172] (0xc00003b290) Data frame received for 5\nI0505 00:52:49.314796 3371 log.go:172] (0xc00053edc0) (5) 
Data frame handling\nI0505 00:52:49.316444 3371 log.go:172] (0xc00003b290) Data frame received for 1\nI0505 00:52:49.316485 3371 log.go:172] (0xc0007ef540) (1) Data frame handling\nI0505 00:52:49.316504 3371 log.go:172] (0xc0007ef540) (1) Data frame sent\nI0505 00:52:49.316527 3371 log.go:172] (0xc00003b290) (0xc0007ef540) Stream removed, broadcasting: 1\nI0505 00:52:49.316917 3371 log.go:172] (0xc00003b290) (0xc0007ef540) Stream removed, broadcasting: 1\nI0505 00:52:49.316938 3371 log.go:172] (0xc00003b290) (0xc0007065a0) Stream removed, broadcasting: 3\nI0505 00:52:49.316949 3371 log.go:172] (0xc00003b290) (0xc00053edc0) Stream removed, broadcasting: 5\n" May 5 00:52:49.322: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 5 00:52:49.322: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 5 00:52:59.358: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 5 00:53:09.402: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3762 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 00:53:09.635: INFO: stderr: "I0505 00:53:09.541409 3391 log.go:172] (0xc0008ab080) (0xc000986640) Create stream\nI0505 00:53:09.541520 3391 log.go:172] (0xc0008ab080) (0xc000986640) Stream added, broadcasting: 1\nI0505 00:53:09.548786 3391 log.go:172] (0xc0008ab080) Reply frame received for 1\nI0505 00:53:09.548834 3391 log.go:172] (0xc0008ab080) (0xc0002692c0) Create stream\nI0505 00:53:09.548846 3391 log.go:172] (0xc0008ab080) (0xc0002692c0) Stream added, broadcasting: 3\nI0505 00:53:09.549806 3391 log.go:172] (0xc0008ab080) Reply frame received for 3\nI0505 00:53:09.549832 3391 log.go:172] (0xc0008ab080) (0xc0004263c0) Create stream\nI0505 00:53:09.549842 3391 log.go:172] (0xc0008ab080) (0xc0004263c0) Stream 
added, broadcasting: 5\nI0505 00:53:09.550558 3391 log.go:172] (0xc0008ab080) Reply frame received for 5\nI0505 00:53:09.627022 3391 log.go:172] (0xc0008ab080) Data frame received for 3\nI0505 00:53:09.627057 3391 log.go:172] (0xc0002692c0) (3) Data frame handling\nI0505 00:53:09.627072 3391 log.go:172] (0xc0002692c0) (3) Data frame sent\nI0505 00:53:09.627080 3391 log.go:172] (0xc0008ab080) Data frame received for 3\nI0505 00:53:09.627085 3391 log.go:172] (0xc0002692c0) (3) Data frame handling\nI0505 00:53:09.627279 3391 log.go:172] (0xc0008ab080) Data frame received for 5\nI0505 00:53:09.627298 3391 log.go:172] (0xc0004263c0) (5) Data frame handling\nI0505 00:53:09.627314 3391 log.go:172] (0xc0004263c0) (5) Data frame sent\nI0505 00:53:09.627327 3391 log.go:172] (0xc0008ab080) Data frame received for 5\nI0505 00:53:09.627335 3391 log.go:172] (0xc0004263c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0505 00:53:09.629421 3391 log.go:172] (0xc0008ab080) Data frame received for 1\nI0505 00:53:09.629457 3391 log.go:172] (0xc000986640) (1) Data frame handling\nI0505 00:53:09.629481 3391 log.go:172] (0xc000986640) (1) Data frame sent\nI0505 00:53:09.629515 3391 log.go:172] (0xc0008ab080) (0xc000986640) Stream removed, broadcasting: 1\nI0505 00:53:09.629541 3391 log.go:172] (0xc0008ab080) Go away received\nI0505 00:53:09.630045 3391 log.go:172] (0xc0008ab080) (0xc000986640) Stream removed, broadcasting: 1\nI0505 00:53:09.630071 3391 log.go:172] (0xc0008ab080) (0xc0002692c0) Stream removed, broadcasting: 3\nI0505 00:53:09.630085 3391 log.go:172] (0xc0008ab080) (0xc0004263c0) Stream removed, broadcasting: 5\n" May 5 00:53:09.635: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 5 00:53:09.635: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 5 00:53:19.699: INFO: Waiting for StatefulSet statefulset-3762/ss2 to 
complete update May 5 00:53:19.699: INFO: Waiting for Pod statefulset-3762/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 5 00:53:19.699: INFO: Waiting for Pod statefulset-3762/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 5 00:53:19.699: INFO: Waiting for Pod statefulset-3762/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 5 00:53:29.707: INFO: Waiting for StatefulSet statefulset-3762/ss2 to complete update May 5 00:53:29.707: INFO: Waiting for Pod statefulset-3762/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 5 00:53:29.707: INFO: Waiting for Pod statefulset-3762/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 5 00:53:39.764: INFO: Waiting for StatefulSet statefulset-3762/ss2 to complete update May 5 00:53:39.764: INFO: Waiting for Pod statefulset-3762/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 5 00:53:49.708: INFO: Waiting for StatefulSet statefulset-3762/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 5 00:53:59.707: INFO: Deleting all statefulset in ns statefulset-3762 May 5 00:53:59.710: INFO: Scaling statefulset ss2 to 0 May 5 00:54:19.765: INFO: Waiting for statefulset status.replicas updated to 0 May 5 00:54:19.768: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:54:19.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3762" for this suite. 
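The rolling update and rollback exercised by this spec depend on the StatefulSet's RollingUpdate strategy. A minimal sketch of such an object (the real test builds it in Go; the image tag and labels here are illustrative, not taken from the log):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test            # headless service name (illustrative)
  replicas: 3
  selector:
    matchLabels:
      app: ss2
  updateStrategy:
    type: RollingUpdate        # pods are replaced one at a time, in reverse ordinal order
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver
        image: httpd:2.4.38-alpine   # editing this field creates a new controller revision
```

Mutating spec.template (e.g. the image) produces a new controller revision such as ss2-84f9d6bf57; reverting the template rolls pods back to the prior revision, again highest ordinal first, which is what the "Rolling back update in reverse ordinal order" step above verifies.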
• [SLOW TEST:164.630 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":288,"completed":232,"skipped":4008,"failed":0} [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:54:19.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
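The "simple DaemonSet" created in the step above is, in outline, something like the following sketch (the actual test constructs the object programmatically; the image name is an assumption):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      # No toleration for node-role.kubernetes.io/master:NoSchedule is declared,
      # so the tainted control-plane node is skipped -- hence the repeated
      # "DaemonSet pods can't tolerate node latest-control-plane" messages below.
      containers:
      - name: app
        image: httpd:2.4.38-alpine
```

Because the control-plane node is excluded, the expected steady state is 2 running nodes with 2 available pods, which is the condition the polling loop waits for.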
May 5 00:54:19.908: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 00:54:19.929: INFO: Number of nodes with available pods: 0 May 5 00:54:19.929: INFO: Node latest-worker is running more than one daemon pod May 5 00:54:20.934: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 00:54:20.939: INFO: Number of nodes with available pods: 0 May 5 00:54:20.939: INFO: Node latest-worker is running more than one daemon pod May 5 00:54:21.935: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 00:54:21.939: INFO: Number of nodes with available pods: 0 May 5 00:54:21.939: INFO: Node latest-worker is running more than one daemon pod May 5 00:54:22.947: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 00:54:22.949: INFO: Number of nodes with available pods: 0 May 5 00:54:22.949: INFO: Node latest-worker is running more than one daemon pod May 5 00:54:24.043: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 00:54:24.046: INFO: Number of nodes with available pods: 0 May 5 00:54:24.046: INFO: Node latest-worker is running more than one daemon pod May 5 00:54:24.933: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 00:54:24.939: INFO: Number of nodes with available pods: 1 May 5 00:54:24.939: INFO: Node 
latest-worker is running more than one daemon pod May 5 00:54:25.946: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 00:54:25.963: INFO: Number of nodes with available pods: 2 May 5 00:54:25.963: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 5 00:54:26.076: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 00:54:26.097: INFO: Number of nodes with available pods: 1 May 5 00:54:26.097: INFO: Node latest-worker2 is running more than one daemon pod May 5 00:54:27.103: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 00:54:27.107: INFO: Number of nodes with available pods: 1 May 5 00:54:27.107: INFO: Node latest-worker2 is running more than one daemon pod May 5 00:54:28.103: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 00:54:28.107: INFO: Number of nodes with available pods: 1 May 5 00:54:28.107: INFO: Node latest-worker2 is running more than one daemon pod May 5 00:54:29.103: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 00:54:29.107: INFO: Number of nodes with available pods: 1 May 5 00:54:29.107: INFO: Node latest-worker2 is running more than one daemon pod May 5 00:54:30.103: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip 
checking this node May 5 00:54:30.106: INFO: Number of nodes with available pods: 1 May 5 00:54:30.106: INFO: Node latest-worker2 is running more than one daemon pod May 5 00:54:31.103: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 00:54:31.107: INFO: Number of nodes with available pods: 1 May 5 00:54:31.107: INFO: Node latest-worker2 is running more than one daemon pod May 5 00:54:32.102: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 00:54:32.106: INFO: Number of nodes with available pods: 1 May 5 00:54:32.106: INFO: Node latest-worker2 is running more than one daemon pod May 5 00:54:33.103: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 00:54:33.107: INFO: Number of nodes with available pods: 1 May 5 00:54:33.107: INFO: Node latest-worker2 is running more than one daemon pod May 5 00:54:34.103: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 00:54:34.107: INFO: Number of nodes with available pods: 1 May 5 00:54:34.107: INFO: Node latest-worker2 is running more than one daemon pod May 5 00:54:35.103: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 00:54:35.108: INFO: Number of nodes with available pods: 1 May 5 00:54:35.108: INFO: Node latest-worker2 is running more than one daemon pod May 5 00:54:36.103: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 00:54:36.106: INFO: Number of nodes with available pods: 1 May 5 00:54:36.106: INFO: Node latest-worker2 is running more than one daemon pod May 5 00:54:37.180: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 00:54:37.183: INFO: Number of nodes with available pods: 1 May 5 00:54:37.183: INFO: Node latest-worker2 is running more than one daemon pod May 5 00:54:38.102: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 00:54:38.105: INFO: Number of nodes with available pods: 1 May 5 00:54:38.105: INFO: Node latest-worker2 is running more than one daemon pod May 5 00:54:39.102: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 00:54:39.106: INFO: Number of nodes with available pods: 1 May 5 00:54:39.106: INFO: Node latest-worker2 is running more than one daemon pod May 5 00:54:40.103: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 00:54:40.107: INFO: Number of nodes with available pods: 2 May 5 00:54:40.107: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5837, will wait for the garbage collector to delete the pods May 5 00:54:40.168: INFO: Deleting DaemonSet.extensions daemon-set took: 
6.608733ms May 5 00:54:40.268: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.220608ms May 5 00:54:44.972: INFO: Number of nodes with available pods: 0 May 5 00:54:44.972: INFO: Number of running nodes: 0, number of available pods: 0 May 5 00:54:45.006: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5837/daemonsets","resourceVersion":"1537849"},"items":null} May 5 00:54:45.009: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5837/pods","resourceVersion":"1537849"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:54:45.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5837" for this suite. • [SLOW TEST:25.221 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":288,"completed":233,"skipped":4008,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:54:45.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account 
to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 5 00:54:45.163: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. May 5 00:54:45.927: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 5 00:54:48.569: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236886, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236886, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236886, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236885, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 00:54:51.209: INFO: Waited 628.807498ms for the sample-apiserver to be ready to handle requests. 
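Registering the sample API server means pairing a Deployment/Service with an APIService object so the aggregator proxies a whole API group to it. A hedged sketch of such a registration (group, service names, and priorities here are illustrative, not read from the log):

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com   # hypothetical group/version
spec:
  group: wardle.example.com
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  service:                            # backend the aggregator proxies to
    name: sample-api
    namespace: aggregator-1225
  caBundle: "<base64-encoded CA>"     # placeholder; validates the backend's serving cert
```

Once the backing Deployment reports Available and the APIService condition becomes Available=True, the aggregated group is served through the main kube-apiserver, which is what "Waited ... for the sample-apiserver to be ready to handle requests" reflects.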
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:54:51.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-1225" for this suite. • [SLOW TEST:6.729 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":288,"completed":234,"skipped":4018,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:54:51.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2405.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2405.svc.cluster.local; sleep 1; done STEP: 
Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2405.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2405.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 5 00:54:59.785: INFO: DNS probes using dns-test-a4489210-e512-4f25-8fed-895054975a0d succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2405.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2405.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2405.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2405.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 5 00:55:05.926: INFO: File wheezy_udp@dns-test-service-3.dns-2405.svc.cluster.local from pod dns-2405/dns-test-c67e4e88-2e06-43de-9d2e-c00669df156b contains 'foo.example.com. ' instead of 'bar.example.com.' May 5 00:55:05.929: INFO: File jessie_udp@dns-test-service-3.dns-2405.svc.cluster.local from pod dns-2405/dns-test-c67e4e88-2e06-43de-9d2e-c00669df156b contains 'foo.example.com. ' instead of 'bar.example.com.' May 5 00:55:05.929: INFO: Lookups using dns-2405/dns-test-c67e4e88-2e06-43de-9d2e-c00669df156b failed for: [wheezy_udp@dns-test-service-3.dns-2405.svc.cluster.local jessie_udp@dns-test-service-3.dns-2405.svc.cluster.local] May 5 00:55:10.935: INFO: File wheezy_udp@dns-test-service-3.dns-2405.svc.cluster.local from pod dns-2405/dns-test-c67e4e88-2e06-43de-9d2e-c00669df156b contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 5 00:55:10.939: INFO: File jessie_udp@dns-test-service-3.dns-2405.svc.cluster.local from pod dns-2405/dns-test-c67e4e88-2e06-43de-9d2e-c00669df156b contains 'foo.example.com. ' instead of 'bar.example.com.' May 5 00:55:10.939: INFO: Lookups using dns-2405/dns-test-c67e4e88-2e06-43de-9d2e-c00669df156b failed for: [wheezy_udp@dns-test-service-3.dns-2405.svc.cluster.local jessie_udp@dns-test-service-3.dns-2405.svc.cluster.local] May 5 00:55:15.934: INFO: File wheezy_udp@dns-test-service-3.dns-2405.svc.cluster.local from pod dns-2405/dns-test-c67e4e88-2e06-43de-9d2e-c00669df156b contains 'foo.example.com. ' instead of 'bar.example.com.' May 5 00:55:15.939: INFO: Lookups using dns-2405/dns-test-c67e4e88-2e06-43de-9d2e-c00669df156b failed for: [wheezy_udp@dns-test-service-3.dns-2405.svc.cluster.local] May 5 00:55:20.935: INFO: File wheezy_udp@dns-test-service-3.dns-2405.svc.cluster.local from pod dns-2405/dns-test-c67e4e88-2e06-43de-9d2e-c00669df156b contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 5 00:55:20.939: INFO: Lookups using dns-2405/dns-test-c67e4e88-2e06-43de-9d2e-c00669df156b failed for: [wheezy_udp@dns-test-service-3.dns-2405.svc.cluster.local] May 5 00:55:25.939: INFO: DNS probes using dns-test-c67e4e88-2e06-43de-9d2e-c00669df156b succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2405.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2405.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2405.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2405.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 5 00:55:34.879: INFO: DNS probes using dns-test-db7eea99-7fb5-47cc-b53c-421c5e8b313f succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:55:35.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2405" for this suite. 
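The Service under test is an ExternalName service, which the cluster DNS answers with a CNAME rather than an A record. A sketch of the object as the spec first creates it (fields inferred from the log; the manifest itself is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-2405
spec:
  type: ExternalName
  externalName: foo.example.com   # later patched to bar.example.com,
                                  # then the Service is converted to type=ClusterIP
```

A CNAME query for dns-test-service-3.dns-2405.svc.cluster.local returns the externalName value, which is why the probe pods first observe foo.example.com, then (after DNS caches catch up) bar.example.com, and finally an A record once the Service becomes ClusterIP.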
• [SLOW TEST:43.357 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":288,"completed":235,"skipped":4027,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:55:35.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 5 00:55:41.207: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:55:41.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "container-runtime-7179" for this suite. • [SLOW TEST:6.170 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":236,"skipped":4038,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:55:41.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in 
kubernetes STEP: updating the pod May 5 00:55:45.930: INFO: Successfully updated pod "pod-update-activedeadlineseconds-adafb42c-985b-4ba8-a2b7-5c098ccbfa4b" May 5 00:55:45.930: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-adafb42c-985b-4ba8-a2b7-5c098ccbfa4b" in namespace "pods-8876" to be "terminated due to deadline exceeded" May 5 00:55:45.935: INFO: Pod "pod-update-activedeadlineseconds-adafb42c-985b-4ba8-a2b7-5c098ccbfa4b": Phase="Running", Reason="", readiness=true. Elapsed: 4.977001ms May 5 00:55:47.939: INFO: Pod "pod-update-activedeadlineseconds-adafb42c-985b-4ba8-a2b7-5c098ccbfa4b": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.008751889s May 5 00:55:47.939: INFO: Pod "pod-update-activedeadlineseconds-adafb42c-985b-4ba8-a2b7-5c098ccbfa4b" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:55:47.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8876" for this suite. 
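The spec above relies on activeDeadlineSeconds being one of the few pod-spec fields that is mutable on a running pod. A minimal sketch of the kind of pod involved (name, value, and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-update-activedeadlineseconds   # illustrative name
spec:
  activeDeadlineSeconds: 5   # illustrative; shrinking this on a running pod causes the
                             # kubelet to kill it once exceeded: Phase=Failed,
                             # Reason=DeadlineExceeded, as seen in the log above
  containers:
  - name: main
    image: busybox:1.29
    command: ["sleep", "3600"]
```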
• [SLOW TEST:6.662 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":288,"completed":237,"skipped":4058,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:55:47.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-a6f237d7-cb6f-4cb4-814b-02909130a92f STEP: Creating the pod STEP: Updating configmap configmap-test-upd-a6f237d7-cb6f-4cb4-814b-02909130a92f STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:55:56.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6165" for this suite. 
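The ConfigMap-volume update behavior verified above comes from mounting a ConfigMap as a volume: the kubelet periodically syncs the projected files, so edits to the ConfigMap appear inside the running container without a restart. A hedged sketch (pod name, key, and image are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-test       # illustrative name
spec:
  volumes:
  - name: cfg
    configMap:
      name: configmap-test-upd      # updating this ConfigMap rewrites the mounted files
  containers:
  - name: reader
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/cfg/data-1; sleep 1; done"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
```

Note this propagation applies only to volume mounts; ConfigMap values consumed as environment variables are fixed at container start.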
• [SLOW TEST:8.175 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":238,"skipped":4067,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 5 00:55:56.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 5 00:55:56.853: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 5 00:55:58.995: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236956, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236956, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236956, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236956, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 5 00:56:01.000: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236956, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236956, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236956, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236956, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 5 00:56:04.029: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 5 00:56:04.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2942" for this suite.
STEP: Destroying namespace "webhook-2942-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.084 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should unconditionally reject operations on fail closed webhook [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":288,"completed":239,"skipped":4118,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 5 00:56:04.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
May 5 00:56:04.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
May 5 00:56:14.943: INFO: >>> kubeConfig: /root/.kube/config
May 5 00:56:16.867: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 5 00:56:27.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2429" for this suite.
• [SLOW TEST:23.331 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of same group but different versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":288,"completed":240,"skipped":4135,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 5 00:56:27.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 5 00:56:27.607: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ce4b488-914a-41a6-abd2-2fd8fcbb0c53" in namespace "downward-api-5947" to be "Succeeded or Failed"
May 5 00:56:27.609: INFO: Pod "downwardapi-volume-8ce4b488-914a-41a6-abd2-2fd8fcbb0c53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.257755ms
May 5 00:56:29.614: INFO: Pod "downwardapi-volume-8ce4b488-914a-41a6-abd2-2fd8fcbb0c53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006732023s
May 5 00:56:31.617: INFO: Pod "downwardapi-volume-8ce4b488-914a-41a6-abd2-2fd8fcbb0c53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010559784s
STEP: Saw pod success
May 5 00:56:31.617: INFO: Pod "downwardapi-volume-8ce4b488-914a-41a6-abd2-2fd8fcbb0c53" satisfied condition "Succeeded or Failed"
May 5 00:56:31.621: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-8ce4b488-914a-41a6-abd2-2fd8fcbb0c53 container client-container:
STEP: delete the pod
May 5 00:56:31.671: INFO: Waiting for pod downwardapi-volume-8ce4b488-914a-41a6-abd2-2fd8fcbb0c53 to disappear
May 5 00:56:31.685: INFO: Pod downwardapi-volume-8ce4b488-914a-41a6-abd2-2fd8fcbb0c53 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 5 00:56:31.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5947" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":241,"skipped":4175,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 5 00:56:31.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 5 00:56:32.554: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 5 00:56:34.614: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236992, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236992, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236992, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724236992, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 5 00:56:37.768: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
May 5 00:56:37.788: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 5 00:56:37.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6384" for this suite.
STEP: Destroying namespace "webhook-6384-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.369 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should deny crd creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":288,"completed":242,"skipped":4186,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 5 00:56:38.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-f62ca1d9-7ac4-4bbc-ad8d-f34b1f3ace68
STEP: Creating a pod to test consume secrets
May 5 00:56:39.065: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-10af28a0-2425-4f79-9010-57920d5f71ec" in namespace "projected-4028" to be "Succeeded or Failed"
May 5 00:56:39.185: INFO: Pod "pod-projected-secrets-10af28a0-2425-4f79-9010-57920d5f71ec": Phase="Pending", Reason="", readiness=false. Elapsed: 120.268198ms
May 5 00:56:41.228: INFO: Pod "pod-projected-secrets-10af28a0-2425-4f79-9010-57920d5f71ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163155428s
May 5 00:56:43.232: INFO: Pod "pod-projected-secrets-10af28a0-2425-4f79-9010-57920d5f71ec": Phase="Running", Reason="", readiness=true. Elapsed: 4.16724343s
May 5 00:56:45.265: INFO: Pod "pod-projected-secrets-10af28a0-2425-4f79-9010-57920d5f71ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.199350978s
STEP: Saw pod success
May 5 00:56:45.265: INFO: Pod "pod-projected-secrets-10af28a0-2425-4f79-9010-57920d5f71ec" satisfied condition "Succeeded or Failed"
May 5 00:56:45.268: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-10af28a0-2425-4f79-9010-57920d5f71ec container projected-secret-volume-test:
STEP: delete the pod
May 5 00:56:45.288: INFO: Waiting for pod pod-projected-secrets-10af28a0-2425-4f79-9010-57920d5f71ec to disappear
May 5 00:56:45.292: INFO: Pod pod-projected-secrets-10af28a0-2425-4f79-9010-57920d5f71ec no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 5 00:56:45.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4028" for this suite.
• [SLOW TEST:7.234 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":243,"skipped":4212,"failed":0}
SSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 5 00:56:45.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 5 00:56:45.474: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8c9c247b-9323-4d57-b9d0-7725565b222a" in namespace "downward-api-1751" to be "Succeeded or Failed"
May 5 00:56:45.486: INFO: Pod "downwardapi-volume-8c9c247b-9323-4d57-b9d0-7725565b222a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.129135ms
May 5 00:56:47.514: INFO: Pod "downwardapi-volume-8c9c247b-9323-4d57-b9d0-7725565b222a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039725987s
May 5 00:56:49.519: INFO: Pod "downwardapi-volume-8c9c247b-9323-4d57-b9d0-7725565b222a": Phase="Running", Reason="", readiness=true. Elapsed: 4.044540813s
May 5 00:56:51.523: INFO: Pod "downwardapi-volume-8c9c247b-9323-4d57-b9d0-7725565b222a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.048458504s
STEP: Saw pod success
May 5 00:56:51.523: INFO: Pod "downwardapi-volume-8c9c247b-9323-4d57-b9d0-7725565b222a" satisfied condition "Succeeded or Failed"
May 5 00:56:51.525: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-8c9c247b-9323-4d57-b9d0-7725565b222a container client-container:
STEP: delete the pod
May 5 00:56:51.602: INFO: Waiting for pod downwardapi-volume-8c9c247b-9323-4d57-b9d0-7725565b222a to disappear
May 5 00:56:51.620: INFO: Pod downwardapi-volume-8c9c247b-9323-4d57-b9d0-7725565b222a no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 5 00:56:51.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1751" for this suite.
• [SLOW TEST:6.342 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":244,"skipped":4215,"failed":0}
SSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 5 00:56:51.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 5 00:56:51.846: INFO: Create a RollingUpdate DaemonSet
May 5 00:56:51.881: INFO: Check that daemon pods launch on every node of the cluster
May 5 00:56:51.886: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 00:56:51.924: INFO: Number of nodes with available pods: 0
May 5 00:56:51.924: INFO: Node latest-worker is running more than one daemon pod
May 5 00:56:52.929: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 00:56:52.934: INFO: Number of nodes with available pods: 0
May 5 00:56:52.934: INFO: Node latest-worker is running more than one daemon pod
May 5 00:56:54.051: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 00:56:54.055: INFO: Number of nodes with available pods: 0
May 5 00:56:54.055: INFO: Node latest-worker is running more than one daemon pod
May 5 00:56:54.929: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 00:56:54.932: INFO: Number of nodes with available pods: 0
May 5 00:56:54.932: INFO: Node latest-worker is running more than one daemon pod
May 5 00:56:55.930: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 00:56:55.934: INFO: Number of nodes with available pods: 2
May 5 00:56:55.934: INFO: Number of running nodes: 2, number of available pods: 2
May 5 00:56:55.934: INFO: Update the DaemonSet to trigger a rollout
May 5 00:56:55.942: INFO: Updating DaemonSet daemon-set
May 5 00:57:00.004: INFO: Roll back the DaemonSet before rollout is complete
May 5 00:57:00.010: INFO: Updating DaemonSet daemon-set
May 5 00:57:00.010: INFO: Make sure DaemonSet rollback is complete
May 5 00:57:00.018: INFO: Wrong image for pod: daemon-set-hj7v7. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 5 00:57:00.018: INFO: Pod daemon-set-hj7v7 is not available
May 5 00:57:00.092: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 00:57:01.097: INFO: Wrong image for pod: daemon-set-hj7v7. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 5 00:57:01.097: INFO: Pod daemon-set-hj7v7 is not available
May 5 00:57:01.102: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 00:57:02.097: INFO: Pod daemon-set-b2w8z is not available
May 5 00:57:02.101: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4241, will wait for the garbage collector to delete the pods
May 5 00:57:02.165: INFO: Deleting DaemonSet.extensions daemon-set took: 6.552805ms
May 5 00:57:02.465: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.292307ms
May 5 00:57:05.570: INFO: Number of nodes with available pods: 0
May 5 00:57:05.570: INFO: Number of running nodes: 0, number of available pods: 0
May 5 00:57:05.572: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4241/daemonsets","resourceVersion":"1538867"},"items":null}
May 5 00:57:05.575: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4241/pods","resourceVersion":"1538867"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 5 00:57:05.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4241" for this suite.
• [SLOW TEST:13.952 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":288,"completed":245,"skipped":4218,"failed":0}
SSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 5 00:57:05.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 5 00:57:05.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 5 00:57:09.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-187" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":288,"completed":246,"skipped":4221,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 5 00:57:09.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-78aa5285-e787-41f2-82bf-c302f6a011e5
STEP: Creating a pod to test consume configMaps
May 5 00:57:09.965: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-661f0b3f-a863-4d37-9af6-fadfe0c949f8" in namespace "projected-2371" to be "Succeeded or Failed"
May 5 00:57:09.985: INFO: Pod "pod-projected-configmaps-661f0b3f-a863-4d37-9af6-fadfe0c949f8": Phase="Pending", Reason="", readiness=false. Elapsed: 19.533838ms
May 5 00:57:11.989: INFO: Pod "pod-projected-configmaps-661f0b3f-a863-4d37-9af6-fadfe0c949f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024016735s
May 5 00:57:13.994: INFO: Pod "pod-projected-configmaps-661f0b3f-a863-4d37-9af6-fadfe0c949f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028652106s
STEP: Saw pod success
May 5 00:57:13.994: INFO: Pod "pod-projected-configmaps-661f0b3f-a863-4d37-9af6-fadfe0c949f8" satisfied condition "Succeeded or Failed"
May 5 00:57:13.997: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-661f0b3f-a863-4d37-9af6-fadfe0c949f8 container projected-configmap-volume-test:
STEP: delete the pod
May 5 00:57:14.036: INFO: Waiting for pod pod-projected-configmaps-661f0b3f-a863-4d37-9af6-fadfe0c949f8 to disappear
May 5 00:57:14.048: INFO: Pod pod-projected-configmaps-661f0b3f-a863-4d37-9af6-fadfe0c949f8 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 5 00:57:14.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2371" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":247,"skipped":4285,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 5 00:57:14.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-crlh8 in namespace proxy-2003
I0505 00:57:14.203846 7 runners.go:190] Created replication controller
with name: proxy-service-crlh8, namespace: proxy-2003, replica count: 1
I0505 00:57:15.254285 7 runners.go:190] proxy-service-crlh8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0505 00:57:16.254549 7 runners.go:190] proxy-service-crlh8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0505 00:57:17.254763 7 runners.go:190] proxy-service-crlh8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0505 00:57:18.255007 7 runners.go:190] proxy-service-crlh8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0505 00:57:19.255219 7 runners.go:190] proxy-service-crlh8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0505 00:57:20.255427 7 runners.go:190] proxy-service-crlh8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0505 00:57:21.255616 7 runners.go:190] proxy-service-crlh8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0505 00:57:22.255872 7 runners.go:190] proxy-service-crlh8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0505 00:57:23.256171 7 runners.go:190] proxy-service-crlh8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0505 00:57:24.256403 7 runners.go:190] proxy-service-crlh8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0505 00:57:25.256728 7 runners.go:190] proxy-service-crlh8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0505 00:57:26.257033 7 runners.go:190] proxy-service-crlh8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0505 00:57:27.257395 7 runners.go:190] proxy-service-crlh8 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 5 00:57:27.260: INFO: setup took 13.112362939s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
May 5 00:57:27.267: INFO: (0) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:162/proxy/: bar (200; 5.977167ms)
May 5 00:57:27.267: INFO: (0) /api/v1/namespaces/proxy-2003/services/http:proxy-service-crlh8:portname2/proxy/: bar (200; 6.801849ms)
May 5 00:57:27.268: INFO: (0) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname2/proxy/: bar (200; 7.120738ms)
May 5 00:57:27.268: INFO: (0) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname1/proxy/: foo (200; 7.260266ms)
May 5 00:57:27.268: INFO: (0) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:1080/proxy/: ... (200; 7.368364ms)
May 5 00:57:27.270: INFO: (0) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:160/proxy/: foo (200; 9.151745ms)
May 5 00:57:27.270: INFO: (0) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:162/proxy/: bar (200; 9.26698ms)
May 5 00:57:27.270: INFO: (0) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:160/proxy/: foo (200; 9.949239ms)
May 5 00:57:27.270: INFO: (0) /api/v1/namespaces/proxy-2003/services/http:proxy-service-crlh8:portname1/proxy/: foo (200; 9.526175ms)
May 5 00:57:27.270: INFO: (0) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx/proxy/: test (200; 9.962289ms)
May 5 00:57:27.270: INFO: (0) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:1080/proxy/: test<... (200; 9.984566ms)
May 5 00:57:27.276: INFO: (0) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:443/proxy/: test (200; 4.242492ms)
May 5 00:57:27.281: INFO: (1) /api/v1/namespaces/proxy-2003/services/http:proxy-service-crlh8:portname2/proxy/: bar (200; 4.711452ms)
May 5 00:57:27.281: INFO: (1) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname2/proxy/: bar (200; 4.856359ms)
May 5 00:57:27.282: INFO: (1) /api/v1/namespaces/proxy-2003/services/http:proxy-service-crlh8:portname1/proxy/: foo (200; 5.152832ms)
May 5 00:57:27.282: INFO: (1) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname1/proxy/: foo (200; 5.157764ms)
May 5 00:57:27.282: INFO: (1) /api/v1/namespaces/proxy-2003/services/https:proxy-service-crlh8:tlsportname2/proxy/: tls qux (200; 5.222527ms)
May 5 00:57:27.282: INFO: (1) /api/v1/namespaces/proxy-2003/services/https:proxy-service-crlh8:tlsportname1/proxy/: tls baz (200; 5.228134ms)
May 5 00:57:27.282: INFO: (1) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:162/proxy/: bar (200; 5.391452ms)
May 5 00:57:27.282: INFO: (1) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:1080/proxy/: test<... (200; 5.348716ms)
May 5 00:57:27.282: INFO: (1) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:160/proxy/: foo (200; 5.453854ms)
May 5 00:57:27.282: INFO: (1) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:443/proxy/: ... (200; 6.159403ms)
May 5 00:57:27.286: INFO: (2) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:462/proxy/: tls qux (200; 2.943073ms)
May 5 00:57:27.286: INFO: (2) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:162/proxy/: bar (200; 2.988346ms)
May 5 00:57:27.288: INFO: (2) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname2/proxy/: bar (200; 5.551861ms)
May 5 00:57:27.288: INFO: (2) /api/v1/namespaces/proxy-2003/services/http:proxy-service-crlh8:portname1/proxy/: foo (200; 5.502515ms)
May 5 00:57:27.288: INFO: (2) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname1/proxy/: foo (200; 5.627582ms)
May 5 00:57:27.288: INFO: (2) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:162/proxy/: bar (200; 5.647961ms)
May 5 00:57:27.288: INFO: (2) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:160/proxy/: foo (200; 5.673321ms)
May 5 00:57:27.288: INFO: (2) /api/v1/namespaces/proxy-2003/services/https:proxy-service-crlh8:tlsportname2/proxy/: tls qux (200; 5.667505ms)
May 5 00:57:27.288: INFO: (2) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:1080/proxy/: test<... (200; 5.759653ms)
May 5 00:57:27.288: INFO: (2) /api/v1/namespaces/proxy-2003/services/http:proxy-service-crlh8:portname2/proxy/: bar (200; 5.940121ms)
May 5 00:57:27.289: INFO: (2) /api/v1/namespaces/proxy-2003/services/https:proxy-service-crlh8:tlsportname1/proxy/: tls baz (200; 5.939091ms)
May 5 00:57:27.289: INFO: (2) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:443/proxy/: ...
(200; 6.237781ms) May 5 00:57:27.289: INFO: (2) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx/proxy/: test (200; 6.287167ms) May 5 00:57:27.289: INFO: (2) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:460/proxy/: tls baz (200; 6.292077ms) May 5 00:57:27.292: INFO: (3) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:460/proxy/: tls baz (200; 3.394667ms) May 5 00:57:27.293: INFO: (3) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:462/proxy/: tls qux (200; 3.381043ms) May 5 00:57:27.293: INFO: (3) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:162/proxy/: bar (200; 3.928971ms) May 5 00:57:27.293: INFO: (3) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:162/proxy/: bar (200; 3.943535ms) May 5 00:57:27.293: INFO: (3) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:160/proxy/: foo (200; 4.064105ms) May 5 00:57:27.293: INFO: (3) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx/proxy/: test (200; 4.092712ms) May 5 00:57:27.293: INFO: (3) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:160/proxy/: foo (200; 4.129742ms) May 5 00:57:27.293: INFO: (3) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:1080/proxy/: ... (200; 4.124706ms) May 5 00:57:27.293: INFO: (3) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:1080/proxy/: test<... (200; 4.146212ms) May 5 00:57:27.293: INFO: (3) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:443/proxy/: test<... (200; 4.815238ms) May 5 00:57:27.300: INFO: (4) /api/v1/namespaces/proxy-2003/services/https:proxy-service-crlh8:tlsportname2/proxy/: tls qux (200; 4.906956ms) May 5 00:57:27.300: INFO: (4) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:162/proxy/: bar (200; 5.293229ms) May 5 00:57:27.300: INFO: (4) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:1080/proxy/: ... 
(200; 5.402853ms) May 5 00:57:27.300: INFO: (4) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:162/proxy/: bar (200; 5.456004ms) May 5 00:57:27.300: INFO: (4) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:460/proxy/: tls baz (200; 5.411488ms) May 5 00:57:27.300: INFO: (4) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:462/proxy/: tls qux (200; 5.517398ms) May 5 00:57:27.300: INFO: (4) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx/proxy/: test (200; 5.461512ms) May 5 00:57:27.301: INFO: (4) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:443/proxy/: test<... (200; 6.082121ms) May 5 00:57:27.307: INFO: (5) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:160/proxy/: foo (200; 6.34329ms) May 5 00:57:27.307: INFO: (5) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:1080/proxy/: ... (200; 6.416092ms) May 5 00:57:27.307: INFO: (5) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:443/proxy/: test (200; 7.031404ms) May 5 00:57:27.308: INFO: (5) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname1/proxy/: foo (200; 7.083713ms) May 5 00:57:27.313: INFO: (6) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname1/proxy/: foo (200; 4.913434ms) May 5 00:57:27.313: INFO: (6) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:1080/proxy/: ... 
(200; 5.311056ms) May 5 00:57:27.313: INFO: (6) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname2/proxy/: bar (200; 5.285234ms) May 5 00:57:27.313: INFO: (6) /api/v1/namespaces/proxy-2003/services/https:proxy-service-crlh8:tlsportname2/proxy/: tls qux (200; 5.273006ms) May 5 00:57:27.313: INFO: (6) /api/v1/namespaces/proxy-2003/services/http:proxy-service-crlh8:portname1/proxy/: foo (200; 5.359394ms) May 5 00:57:27.314: INFO: (6) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:462/proxy/: tls qux (200; 5.572812ms) May 5 00:57:27.314: INFO: (6) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx/proxy/: test (200; 5.652186ms) May 5 00:57:27.314: INFO: (6) /api/v1/namespaces/proxy-2003/services/https:proxy-service-crlh8:tlsportname1/proxy/: tls baz (200; 5.759423ms) May 5 00:57:27.314: INFO: (6) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:160/proxy/: foo (200; 5.631596ms) May 5 00:57:27.314: INFO: (6) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:460/proxy/: tls baz (200; 5.773231ms) May 5 00:57:27.314: INFO: (6) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:443/proxy/: test<... (200; 5.846687ms) May 5 00:57:27.314: INFO: (6) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:162/proxy/: bar (200; 5.942487ms) May 5 00:57:27.318: INFO: (7) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:162/proxy/: bar (200; 4.021719ms) May 5 00:57:27.318: INFO: (7) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:1080/proxy/: ... 
(200; 4.031527ms) May 5 00:57:27.318: INFO: (7) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:160/proxy/: foo (200; 4.34903ms) May 5 00:57:27.318: INFO: (7) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:460/proxy/: tls baz (200; 4.42119ms) May 5 00:57:27.318: INFO: (7) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:160/proxy/: foo (200; 4.426348ms) May 5 00:57:27.318: INFO: (7) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:443/proxy/: test<... (200; 4.448165ms) May 5 00:57:27.320: INFO: (7) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx/proxy/: test (200; 5.633867ms) May 5 00:57:27.320: INFO: (7) /api/v1/namespaces/proxy-2003/services/http:proxy-service-crlh8:portname2/proxy/: bar (200; 5.58661ms) May 5 00:57:27.320: INFO: (7) /api/v1/namespaces/proxy-2003/services/http:proxy-service-crlh8:portname1/proxy/: foo (200; 5.609252ms) May 5 00:57:27.320: INFO: (7) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:462/proxy/: tls qux (200; 5.821581ms) May 5 00:57:27.320: INFO: (7) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname2/proxy/: bar (200; 5.673451ms) May 5 00:57:27.320: INFO: (7) /api/v1/namespaces/proxy-2003/services/https:proxy-service-crlh8:tlsportname1/proxy/: tls baz (200; 6.085474ms) May 5 00:57:27.320: INFO: (7) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname1/proxy/: foo (200; 5.971988ms) May 5 00:57:27.320: INFO: (7) /api/v1/namespaces/proxy-2003/services/https:proxy-service-crlh8:tlsportname2/proxy/: tls qux (200; 5.989021ms) May 5 00:57:27.326: INFO: (8) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx/proxy/: test (200; 5.6992ms) May 5 00:57:27.327: INFO: (8) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:1080/proxy/: ... 
(200; 6.509642ms) May 5 00:57:27.327: INFO: (8) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:160/proxy/: foo (200; 6.485252ms) May 5 00:57:27.327: INFO: (8) /api/v1/namespaces/proxy-2003/services/http:proxy-service-crlh8:portname1/proxy/: foo (200; 6.491711ms) May 5 00:57:27.327: INFO: (8) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:443/proxy/: test<... (200; 6.54171ms) May 5 00:57:27.327: INFO: (8) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname1/proxy/: foo (200; 6.692343ms) May 5 00:57:27.327: INFO: (8) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname2/proxy/: bar (200; 6.585748ms) May 5 00:57:27.327: INFO: (8) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:162/proxy/: bar (200; 6.696799ms) May 5 00:57:27.327: INFO: (8) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:462/proxy/: tls qux (200; 6.638042ms) May 5 00:57:27.327: INFO: (8) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:162/proxy/: bar (200; 6.700978ms) May 5 00:57:27.327: INFO: (8) /api/v1/namespaces/proxy-2003/services/https:proxy-service-crlh8:tlsportname1/proxy/: tls baz (200; 6.628647ms) May 5 00:57:27.327: INFO: (8) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:460/proxy/: tls baz (200; 6.707825ms) May 5 00:57:27.334: INFO: (9) /api/v1/namespaces/proxy-2003/services/https:proxy-service-crlh8:tlsportname1/proxy/: tls baz (200; 6.880142ms) May 5 00:57:27.334: INFO: (9) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:160/proxy/: foo (200; 7.415154ms) May 5 00:57:27.335: INFO: (9) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname2/proxy/: bar (200; 7.588386ms) May 5 00:57:27.335: INFO: (9) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:1080/proxy/: ... 
(200; 7.605049ms) May 5 00:57:27.335: INFO: (9) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx/proxy/: test (200; 7.661505ms) May 5 00:57:27.335: INFO: (9) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:460/proxy/: tls baz (200; 7.6424ms) May 5 00:57:27.335: INFO: (9) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:462/proxy/: tls qux (200; 7.817364ms) May 5 00:57:27.335: INFO: (9) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:162/proxy/: bar (200; 7.902512ms) May 5 00:57:27.335: INFO: (9) /api/v1/namespaces/proxy-2003/services/http:proxy-service-crlh8:portname1/proxy/: foo (200; 7.895296ms) May 5 00:57:27.335: INFO: (9) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:160/proxy/: foo (200; 7.97617ms) May 5 00:57:27.335: INFO: (9) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:443/proxy/: test<... (200; 7.993536ms) May 5 00:57:27.335: INFO: (9) /api/v1/namespaces/proxy-2003/services/http:proxy-service-crlh8:portname2/proxy/: bar (200; 8.138135ms) May 5 00:57:27.335: INFO: (9) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:162/proxy/: bar (200; 8.198603ms) May 5 00:57:27.336: INFO: (9) /api/v1/namespaces/proxy-2003/services/https:proxy-service-crlh8:tlsportname2/proxy/: tls qux (200; 8.853324ms) May 5 00:57:27.336: INFO: (9) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname1/proxy/: foo (200; 8.8867ms) May 5 00:57:27.340: INFO: (10) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:1080/proxy/: ... (200; 3.585154ms) May 5 00:57:27.340: INFO: (10) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx/proxy/: test (200; 3.809142ms) May 5 00:57:27.340: INFO: (10) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:1080/proxy/: test<... 
(200; 4.355082ms) May 5 00:57:27.340: INFO: (10) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:462/proxy/: tls qux (200; 4.39657ms) May 5 00:57:27.340: INFO: (10) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:443/proxy/: ... (200; 4.075068ms) May 5 00:57:27.346: INFO: (11) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:460/proxy/: tls baz (200; 4.607149ms) May 5 00:57:27.346: INFO: (11) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname1/proxy/: foo (200; 4.598164ms) May 5 00:57:27.346: INFO: (11) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:162/proxy/: bar (200; 4.688214ms) May 5 00:57:27.347: INFO: (11) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:160/proxy/: foo (200; 5.125108ms) May 5 00:57:27.347: INFO: (11) /api/v1/namespaces/proxy-2003/services/http:proxy-service-crlh8:portname1/proxy/: foo (200; 5.162393ms) May 5 00:57:27.347: INFO: (11) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:1080/proxy/: test<... (200; 5.353829ms) May 5 00:57:27.347: INFO: (11) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:462/proxy/: tls qux (200; 5.456148ms) May 5 00:57:27.347: INFO: (11) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:162/proxy/: bar (200; 5.40286ms) May 5 00:57:27.347: INFO: (11) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname2/proxy/: bar (200; 5.400472ms) May 5 00:57:27.347: INFO: (11) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx/proxy/: test (200; 5.482649ms) May 5 00:57:27.347: INFO: (11) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:443/proxy/: test (200; 5.322995ms) May 5 00:57:27.353: INFO: (12) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:443/proxy/: ... 
(200; 5.375511ms) May 5 00:57:27.353: INFO: (12) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:162/proxy/: bar (200; 5.368847ms) May 5 00:57:27.353: INFO: (12) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:1080/proxy/: test<... (200; 5.350265ms) May 5 00:57:27.353: INFO: (12) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:162/proxy/: bar (200; 5.3961ms) May 5 00:57:27.353: INFO: (12) /api/v1/namespaces/proxy-2003/services/http:proxy-service-crlh8:portname2/proxy/: bar (200; 5.453316ms) May 5 00:57:27.354: INFO: (12) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname1/proxy/: foo (200; 6.327233ms) May 5 00:57:27.354: INFO: (12) /api/v1/namespaces/proxy-2003/services/http:proxy-service-crlh8:portname1/proxy/: foo (200; 6.262399ms) May 5 00:57:27.354: INFO: (12) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname2/proxy/: bar (200; 6.279972ms) May 5 00:57:27.354: INFO: (12) /api/v1/namespaces/proxy-2003/services/https:proxy-service-crlh8:tlsportname1/proxy/: tls baz (200; 6.362403ms) May 5 00:57:27.354: INFO: (12) /api/v1/namespaces/proxy-2003/services/https:proxy-service-crlh8:tlsportname2/proxy/: tls qux (200; 6.298791ms) May 5 00:57:27.357: INFO: (13) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:460/proxy/: tls baz (200; 3.306202ms) May 5 00:57:27.358: INFO: (13) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:1080/proxy/: ... (200; 4.078573ms) May 5 00:57:27.358: INFO: (13) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:160/proxy/: foo (200; 4.041128ms) May 5 00:57:27.358: INFO: (13) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:162/proxy/: bar (200; 4.036101ms) May 5 00:57:27.358: INFO: (13) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:443/proxy/: test (200; 4.574867ms) May 5 00:57:27.358: INFO: (13) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:1080/proxy/: test<... 
(200; 4.685047ms) May 5 00:57:27.358: INFO: (13) /api/v1/namespaces/proxy-2003/services/https:proxy-service-crlh8:tlsportname1/proxy/: tls baz (200; 4.670509ms) May 5 00:57:27.359: INFO: (13) /api/v1/namespaces/proxy-2003/services/http:proxy-service-crlh8:portname1/proxy/: foo (200; 4.812923ms) May 5 00:57:27.359: INFO: (13) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:162/proxy/: bar (200; 4.700616ms) May 5 00:57:27.359: INFO: (13) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname2/proxy/: bar (200; 4.795114ms) May 5 00:57:27.359: INFO: (13) /api/v1/namespaces/proxy-2003/services/https:proxy-service-crlh8:tlsportname2/proxy/: tls qux (200; 4.896463ms) May 5 00:57:27.359: INFO: (13) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:462/proxy/: tls qux (200; 4.89049ms) May 5 00:57:27.359: INFO: (13) /api/v1/namespaces/proxy-2003/services/http:proxy-service-crlh8:portname2/proxy/: bar (200; 5.051151ms) May 5 00:57:27.359: INFO: (13) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname1/proxy/: foo (200; 5.104055ms) May 5 00:57:27.362: INFO: (14) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:162/proxy/: bar (200; 3.029114ms) May 5 00:57:27.362: INFO: (14) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:443/proxy/: ... (200; 3.495676ms) May 5 00:57:27.363: INFO: (14) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:1080/proxy/: test<... 
(200; 3.817029ms) May 5 00:57:27.363: INFO: (14) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx/proxy/: test (200; 3.828243ms) May 5 00:57:27.363: INFO: (14) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:160/proxy/: foo (200; 4.018676ms) May 5 00:57:27.363: INFO: (14) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:460/proxy/: tls baz (200; 4.068363ms) May 5 00:57:27.363: INFO: (14) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:162/proxy/: bar (200; 4.162007ms) May 5 00:57:27.364: INFO: (14) /api/v1/namespaces/proxy-2003/services/http:proxy-service-crlh8:portname2/proxy/: bar (200; 5.035368ms) May 5 00:57:27.364: INFO: (14) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname1/proxy/: foo (200; 5.207324ms) May 5 00:57:27.364: INFO: (14) /api/v1/namespaces/proxy-2003/services/https:proxy-service-crlh8:tlsportname2/proxy/: tls qux (200; 5.110044ms) May 5 00:57:27.364: INFO: (14) /api/v1/namespaces/proxy-2003/services/http:proxy-service-crlh8:portname1/proxy/: foo (200; 5.272699ms) May 5 00:57:27.364: INFO: (14) /api/v1/namespaces/proxy-2003/services/https:proxy-service-crlh8:tlsportname1/proxy/: tls baz (200; 5.133871ms) May 5 00:57:27.364: INFO: (14) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname2/proxy/: bar (200; 5.155344ms) May 5 00:57:27.368: INFO: (15) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:160/proxy/: foo (200; 3.253035ms) May 5 00:57:27.368: INFO: (15) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:443/proxy/: test (200; 4.609095ms) May 5 00:57:27.369: INFO: (15) /api/v1/namespaces/proxy-2003/services/https:proxy-service-crlh8:tlsportname1/proxy/: tls baz (200; 4.567605ms) May 5 00:57:27.369: INFO: (15) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname2/proxy/: bar (200; 4.60782ms) May 5 00:57:27.369: INFO: (15) 
/api/v1/namespaces/proxy-2003/services/http:proxy-service-crlh8:portname2/proxy/: bar (200; 4.604739ms) May 5 00:57:27.369: INFO: (15) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:1080/proxy/: test<... (200; 4.582621ms) May 5 00:57:27.369: INFO: (15) /api/v1/namespaces/proxy-2003/services/https:proxy-service-crlh8:tlsportname2/proxy/: tls qux (200; 4.675826ms) May 5 00:57:27.369: INFO: (15) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:1080/proxy/: ... (200; 4.616987ms) May 5 00:57:27.369: INFO: (15) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:162/proxy/: bar (200; 4.830572ms) May 5 00:57:27.369: INFO: (15) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:460/proxy/: tls baz (200; 4.842582ms) May 5 00:57:27.369: INFO: (15) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:160/proxy/: foo (200; 4.811334ms) May 5 00:57:27.369: INFO: (15) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname1/proxy/: foo (200; 4.938408ms) May 5 00:57:27.369: INFO: (15) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:162/proxy/: bar (200; 4.986316ms) May 5 00:57:27.369: INFO: (15) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:462/proxy/: tls qux (200; 5.072452ms) May 5 00:57:27.373: INFO: (16) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:160/proxy/: foo (200; 3.490107ms) May 5 00:57:27.373: INFO: (16) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:162/proxy/: bar (200; 3.755953ms) May 5 00:57:27.373: INFO: (16) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:160/proxy/: foo (200; 3.821682ms) May 5 00:57:27.373: INFO: (16) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:1080/proxy/: test<... 
(200; 3.827013ms) May 5 00:57:27.373: INFO: (16) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:162/proxy/: bar (200; 3.998091ms) May 5 00:57:27.374: INFO: (16) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:462/proxy/: tls qux (200; 4.116089ms) May 5 00:57:27.374: INFO: (16) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:460/proxy/: tls baz (200; 4.13697ms) May 5 00:57:27.374: INFO: (16) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:443/proxy/: ... (200; 4.327207ms) May 5 00:57:27.374: INFO: (16) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx/proxy/: test (200; 4.484184ms) May 5 00:57:27.374: INFO: (16) /api/v1/namespaces/proxy-2003/services/http:proxy-service-crlh8:portname2/proxy/: bar (200; 4.775901ms) May 5 00:57:27.375: INFO: (16) /api/v1/namespaces/proxy-2003/services/https:proxy-service-crlh8:tlsportname1/proxy/: tls baz (200; 5.450076ms) May 5 00:57:27.375: INFO: (16) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname1/proxy/: foo (200; 5.485671ms) May 5 00:57:27.375: INFO: (16) /api/v1/namespaces/proxy-2003/services/https:proxy-service-crlh8:tlsportname2/proxy/: tls qux (200; 5.472338ms) May 5 00:57:27.375: INFO: (16) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname2/proxy/: bar (200; 5.523507ms) May 5 00:57:27.375: INFO: (16) /api/v1/namespaces/proxy-2003/services/http:proxy-service-crlh8:portname1/proxy/: foo (200; 5.51778ms) May 5 00:57:27.378: INFO: (17) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:1080/proxy/: test<... 
(200; 2.977574ms) May 5 00:57:27.378: INFO: (17) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:443/proxy/: test (200; 3.775296ms) May 5 00:57:27.379: INFO: (17) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname2/proxy/: bar (200; 4.093275ms) May 5 00:57:27.379: INFO: (17) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:162/proxy/: bar (200; 4.137193ms) May 5 00:57:27.379: INFO: (17) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:1080/proxy/: ... (200; 4.126882ms) May 5 00:57:27.380: INFO: (17) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname1/proxy/: foo (200; 4.409175ms) May 5 00:57:27.380: INFO: (17) /api/v1/namespaces/proxy-2003/services/http:proxy-service-crlh8:portname2/proxy/: bar (200; 4.411003ms) May 5 00:57:27.380: INFO: (17) /api/v1/namespaces/proxy-2003/services/https:proxy-service-crlh8:tlsportname2/proxy/: tls qux (200; 4.355895ms) May 5 00:57:27.380: INFO: (17) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:162/proxy/: bar (200; 4.447054ms) May 5 00:57:27.380: INFO: (17) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:160/proxy/: foo (200; 4.408623ms) May 5 00:57:27.380: INFO: (17) /api/v1/namespaces/proxy-2003/services/https:proxy-service-crlh8:tlsportname1/proxy/: tls baz (200; 4.993125ms) May 5 00:57:27.381: INFO: (17) /api/v1/namespaces/proxy-2003/services/http:proxy-service-crlh8:portname1/proxy/: foo (200; 5.298661ms) May 5 00:57:27.384: INFO: (18) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:160/proxy/: foo (200; 3.217743ms) May 5 00:57:27.385: INFO: (18) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname1/proxy/: foo (200; 4.73235ms) May 5 00:57:27.386: INFO: (18) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:162/proxy/: bar (200; 5.127197ms) May 5 00:57:27.386: INFO: (18) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:162/proxy/: bar (200; 5.153526ms) May 5 
00:57:27.386: INFO: (18) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:160/proxy/: foo (200; 5.147558ms) May 5 00:57:27.386: INFO: (18) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx/proxy/: test (200; 5.127723ms) May 5 00:57:27.386: INFO: (18) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:460/proxy/: tls baz (200; 5.155621ms) May 5 00:57:27.386: INFO: (18) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:1080/proxy/: ... (200; 5.146607ms) May 5 00:57:27.386: INFO: (18) /api/v1/namespaces/proxy-2003/services/http:proxy-service-crlh8:portname1/proxy/: foo (200; 5.2178ms) May 5 00:57:27.386: INFO: (18) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:443/proxy/: test<... (200; 5.276944ms) May 5 00:57:27.386: INFO: (18) /api/v1/namespaces/proxy-2003/services/http:proxy-service-crlh8:portname2/proxy/: bar (200; 5.317962ms) May 5 00:57:27.390: INFO: (19) /api/v1/namespaces/proxy-2003/services/https:proxy-service-crlh8:tlsportname2/proxy/: tls qux (200; 4.01037ms) May 5 00:57:27.390: INFO: (19) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:460/proxy/: tls baz (200; 4.183531ms) May 5 00:57:27.390: INFO: (19) /api/v1/namespaces/proxy-2003/services/http:proxy-service-crlh8:portname1/proxy/: foo (200; 4.102884ms) May 5 00:57:27.390: INFO: (19) /api/v1/namespaces/proxy-2003/services/http:proxy-service-crlh8:portname2/proxy/: bar (200; 4.157291ms) May 5 00:57:27.390: INFO: (19) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname2/proxy/: bar (200; 4.166397ms) May 5 00:57:27.391: INFO: (19) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:1080/proxy/: ... 
(200; 4.262485ms) May 5 00:57:27.391: INFO: (19) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:160/proxy/: foo (200; 4.322574ms) May 5 00:57:27.391: INFO: (19) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:462/proxy/: tls qux (200; 4.4124ms) May 5 00:57:27.391: INFO: (19) /api/v1/namespaces/proxy-2003/services/proxy-service-crlh8:portname1/proxy/: foo (200; 4.757194ms) May 5 00:57:27.391: INFO: (19) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx/proxy/: test (200; 4.695011ms) May 5 00:57:27.391: INFO: (19) /api/v1/namespaces/proxy-2003/pods/proxy-service-crlh8-88gmx:162/proxy/: bar (200; 4.686684ms) May 5 00:57:27.391: INFO: (19) /api/v1/namespaces/proxy-2003/pods/http:proxy-service-crlh8-88gmx:162/proxy/: bar (200; 4.727659ms) May 5 00:57:27.391: INFO: (19) /api/v1/namespaces/proxy-2003/pods/https:proxy-service-crlh8-88gmx:443/proxy/: test<... (200; 4.808751ms) May 5 00:57:27.391: INFO: (19) /api/v1/namespaces/proxy-2003/services/https:proxy-service-crlh8:tlsportname1/proxy/: tls baz (200; 5.096077ms) STEP: deleting ReplicationController proxy-service-crlh8 in namespace proxy-2003, will wait for the garbage collector to delete the pods May 5 00:57:27.452: INFO: Deleting ReplicationController proxy-service-crlh8 took: 8.530429ms May 5 00:57:27.752: INFO: Terminating ReplicationController proxy-service-crlh8 pods took: 300.244041ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:57:34.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2003" for this suite. 
• [SLOW TEST:20.901 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":288,"completed":248,"skipped":4296,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:57:34.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:57:35.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2684" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":288,"completed":249,"skipped":4299,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:57:35.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 5 00:57:35.271: INFO: Waiting up to 5m0s for pod "pod-8b6c5157-e83a-4039-9439-3fe71c86b2d9" in namespace "emptydir-4206" to be "Succeeded or Failed" May 5 00:57:35.274: INFO: Pod "pod-8b6c5157-e83a-4039-9439-3fe71c86b2d9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.388649ms May 5 00:57:37.279: INFO: Pod "pod-8b6c5157-e83a-4039-9439-3fe71c86b2d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007932416s May 5 00:57:39.282: INFO: Pod "pod-8b6c5157-e83a-4039-9439-3fe71c86b2d9": Phase="Running", Reason="", readiness=true. Elapsed: 4.011282417s May 5 00:57:41.286: INFO: Pod "pod-8b6c5157-e83a-4039-9439-3fe71c86b2d9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.015695716s STEP: Saw pod success May 5 00:57:41.287: INFO: Pod "pod-8b6c5157-e83a-4039-9439-3fe71c86b2d9" satisfied condition "Succeeded or Failed" May 5 00:57:41.290: INFO: Trying to get logs from node latest-worker pod pod-8b6c5157-e83a-4039-9439-3fe71c86b2d9 container test-container: STEP: delete the pod May 5 00:57:41.320: INFO: Waiting for pod pod-8b6c5157-e83a-4039-9439-3fe71c86b2d9 to disappear May 5 00:57:41.366: INFO: Pod pod-8b6c5157-e83a-4039-9439-3fe71c86b2d9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:57:41.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4206" for this suite. • [SLOW TEST:6.207 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":250,"skipped":4315,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:57:41.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-810339ba-650f-4996-949c-51d3cc34c19f STEP: Creating a pod to test consume configMaps May 5 00:57:41.507: INFO: Waiting up to 5m0s for pod "pod-configmaps-361c4bfb-6d92-443e-8ac6-3a1f76cf15c8" in namespace "configmap-6245" to be "Succeeded or Failed" May 5 00:57:41.527: INFO: Pod "pod-configmaps-361c4bfb-6d92-443e-8ac6-3a1f76cf15c8": Phase="Pending", Reason="", readiness=false. Elapsed: 20.508699ms May 5 00:57:43.547: INFO: Pod "pod-configmaps-361c4bfb-6d92-443e-8ac6-3a1f76cf15c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040409876s May 5 00:57:45.649: INFO: Pod "pod-configmaps-361c4bfb-6d92-443e-8ac6-3a1f76cf15c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.142287329s STEP: Saw pod success May 5 00:57:45.649: INFO: Pod "pod-configmaps-361c4bfb-6d92-443e-8ac6-3a1f76cf15c8" satisfied condition "Succeeded or Failed" May 5 00:57:45.655: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-361c4bfb-6d92-443e-8ac6-3a1f76cf15c8 container configmap-volume-test: STEP: delete the pod May 5 00:57:45.854: INFO: Waiting for pod pod-configmaps-361c4bfb-6d92-443e-8ac6-3a1f76cf15c8 to disappear May 5 00:57:45.882: INFO: Pod pod-configmaps-361c4bfb-6d92-443e-8ac6-3a1f76cf15c8 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:57:45.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6245" for this suite. 
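The ConfigMap volume test above (create a ConfigMap, mount it into a pod, read the file back) can be reproduced by hand. The sketch below is hedged: the names `demo-config` and `demo-pod` are hypothetical, and it assumes `kubectl` is on the PATH and pointed at a reachable cluster.

```shell
# Create a ConfigMap with a single key, mirroring the e2e test's setup.
kubectl create configmap demo-config --from-literal=data-1=value-1

# Mount it as a volume in a short-lived pod that prints the projected file.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["cat", "/etc/config/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:
      name: demo-config
EOF

# After the pod completes, its log should be the ConfigMap value ("value-1").
kubectl logs demo-pod
```

The e2e framework does essentially this, then asserts on the pod's log output before deleting the pod and namespace.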
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":251,"skipped":4315,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:57:45.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1393 STEP: creating a pod May 5 00:57:46.036: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 --namespace=kubectl-1249 -- logs-generator --log-lines-total 100 --run-duration 20s' May 5 00:57:46.149: INFO: stderr: "" May 5 00:57:46.149: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. May 5 00:57:46.149: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 5 00:57:46.149: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-1249" to be "running and ready, or succeeded" May 5 00:57:46.166: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.802496ms May 5 00:57:48.170: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021282471s May 5 00:57:50.174: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.025517764s May 5 00:57:50.174: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 5 00:57:50.174: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for matching strings May 5 00:57:50.175: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1249' May 5 00:57:50.304: INFO: stderr: "" May 5 00:57:50.304: INFO: stdout: "I0505 00:57:49.082331       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/nv6 429\nI0505 00:57:49.282558       1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/v7c 200\nI0505 00:57:49.482537       1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/glm 440\nI0505 00:57:49.682573       1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/s7f 500\nI0505 00:57:49.882480       1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/zqlz 587\nI0505 00:57:50.082552       1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/g4fw 373\nI0505 00:57:50.282455       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/j45k 469\n" STEP: limiting log lines May 5 00:57:50.305: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1249 --tail=1' May 5 00:57:50.411: INFO: stderr: "" May 5 00:57:50.411: INFO: stdout: "I0505 00:57:50.282455       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/j45k 469\n" May 5 00:57:50.411: INFO: got output "I0505 00:57:50.282455       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/j45k 469\n" STEP: limiting log bytes May 5 
00:57:50.412: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1249 --limit-bytes=1' May 5 00:57:50.520: INFO: stderr: "" May 5 00:57:50.520: INFO: stdout: "I" May 5 00:57:50.520: INFO: got output "I" STEP: exposing timestamps May 5 00:57:50.520: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1249 --tail=1 --timestamps' May 5 00:57:50.630: INFO: stderr: "" May 5 00:57:50.630: INFO: stdout: "2020-05-05T00:57:50.48264626Z I0505 00:57:50.482475 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/tg4 332\n" May 5 00:57:50.630: INFO: got output "2020-05-05T00:57:50.48264626Z I0505 00:57:50.482475 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/tg4 332\n" STEP: restricting to a time range May 5 00:57:53.130: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1249 --since=1s' May 5 00:57:53.255: INFO: stderr: "" May 5 00:57:53.255: INFO: stdout: "I0505 00:57:52.282474 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/default/pods/fv2 507\nI0505 00:57:52.482519 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/5gld 554\nI0505 00:57:52.682471 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/rtq 599\nI0505 00:57:52.882519 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/nn4 596\nI0505 00:57:53.082577 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/lmwh 450\n" May 5 00:57:53.255: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1249 --since=24h' May 5 00:57:53.365: INFO: stderr: "" May 5 00:57:53.365: INFO: stdout: "I0505 00:57:49.082331 1 
logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/nv6 429\nI0505 00:57:49.282558 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/v7c 200\nI0505 00:57:49.482537 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/glm 440\nI0505 00:57:49.682573 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/s7f 500\nI0505 00:57:49.882480 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/zqlz 587\nI0505 00:57:50.082552 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/g4fw 373\nI0505 00:57:50.282455 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/j45k 469\nI0505 00:57:50.482475 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/tg4 332\nI0505 00:57:50.682503 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/tdsd 322\nI0505 00:57:50.882524 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/vprv 336\nI0505 00:57:51.082506 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/6fp 551\nI0505 00:57:51.282516 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/tckv 202\nI0505 00:57:51.482497 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/9dq 530\nI0505 00:57:51.682523 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/rqt8 437\nI0505 00:57:51.882493 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/xh2z 247\nI0505 00:57:52.082582 1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/6fg 507\nI0505 00:57:52.282474 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/default/pods/fv2 507\nI0505 00:57:52.482519 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/5gld 554\nI0505 00:57:52.682471 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/rtq 599\nI0505 00:57:52.882519 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/nn4 596\nI0505 00:57:53.082577 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/lmwh 450\nI0505 00:57:53.282487 1 
logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/swd 462\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 May 5 00:57:53.366: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-1249' May 5 00:57:55.715: INFO: stderr: "" May 5 00:57:55.716: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:57:55.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1249" for this suite. • [SLOW TEST:9.877 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":288,"completed":252,"skipped":4321,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:57:55.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable 
from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-87054e29-9490-4e89-95ce-cd22c52e3dae STEP: Creating a pod to test consume configMaps May 5 00:57:55.918: INFO: Waiting up to 5m0s for pod "pod-configmaps-ac9f7760-ab33-4c10-990d-262c44740685" in namespace "configmap-9500" to be "Succeeded or Failed" May 5 00:57:55.922: INFO: Pod "pod-configmaps-ac9f7760-ab33-4c10-990d-262c44740685": Phase="Pending", Reason="", readiness=false. Elapsed: 3.631406ms May 5 00:57:57.926: INFO: Pod "pod-configmaps-ac9f7760-ab33-4c10-990d-262c44740685": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008151308s May 5 00:57:59.954: INFO: Pod "pod-configmaps-ac9f7760-ab33-4c10-990d-262c44740685": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036209384s STEP: Saw pod success May 5 00:57:59.954: INFO: Pod "pod-configmaps-ac9f7760-ab33-4c10-990d-262c44740685" satisfied condition "Succeeded or Failed" May 5 00:57:59.957: INFO: Trying to get logs from node latest-worker pod pod-configmaps-ac9f7760-ab33-4c10-990d-262c44740685 container configmap-volume-test: STEP: delete the pod May 5 00:58:00.087: INFO: Waiting for pod pod-configmaps-ac9f7760-ab33-4c10-990d-262c44740685 to disappear May 5 00:58:00.096: INFO: Pod pod-configmaps-ac9f7760-ab33-4c10-990d-262c44740685 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:58:00.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9500" for this suite. 
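The Kubectl logs test earlier in this run exercises the main log-filtering flags (`--tail`, `--limit-bytes`, `--timestamps`, `--since`). As a hedged quick reference, the commands the suite ran reduce to the forms below; they assume a running pod (here the test's own `logs-generator` in namespace `kubectl-1249`) and a reachable cluster.

```shell
# All log lines from the pod's container.
kubectl logs logs-generator -n kubectl-1249

# Only the last line (--tail limits by line count, from the end).
kubectl logs logs-generator -n kubectl-1249 --tail=1

# Only the first byte (--limit-bytes truncates the stream, even mid-line).
kubectl logs logs-generator -n kubectl-1249 --limit-bytes=1

# Prefix each line with the container runtime's RFC 3339 timestamp.
kubectl logs logs-generator -n kubectl-1249 --tail=1 --timestamps

# Only lines emitted within the given relative duration.
kubectl logs logs-generator -n kubectl-1249 --since=1s
kubectl logs logs-generator -n kubectl-1249 --since=24h
```

In the transcript above, `--limit-bytes=1` returns exactly `"I"` (the first byte of the first glog line), and `--since=24h` returns the full history, matching these semantics.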
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":253,"skipped":4323,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:58:00.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-8894 May 5 00:58:04.238: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8894 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 5 00:58:04.512: INFO: stderr: "I0505 00:58:04.398797 3573 log.go:172] (0xc0009a9290) (0xc000ae8320) Create stream\nI0505 00:58:04.398853 3573 log.go:172] (0xc0009a9290) (0xc000ae8320) Stream added, broadcasting: 1\nI0505 00:58:04.403748 3573 log.go:172] (0xc0009a9290) Reply frame received for 1\nI0505 00:58:04.403792 3573 log.go:172] (0xc0009a9290) (0xc000742000) Create stream\nI0505 00:58:04.403803 3573 log.go:172] (0xc0009a9290) (0xc000742000) Stream added, broadcasting: 3\nI0505 00:58:04.404846 3573 log.go:172] (0xc0009a9290) 
Reply frame received for 3\nI0505 00:58:04.404882 3573 log.go:172] (0xc0009a9290) (0xc0007166e0) Create stream\nI0505 00:58:04.404911 3573 log.go:172] (0xc0009a9290) (0xc0007166e0) Stream added, broadcasting: 5\nI0505 00:58:04.406068 3573 log.go:172] (0xc0009a9290) Reply frame received for 5\nI0505 00:58:04.495800 3573 log.go:172] (0xc0009a9290) Data frame received for 5\nI0505 00:58:04.495836 3573 log.go:172] (0xc0007166e0) (5) Data frame handling\nI0505 00:58:04.495860 3573 log.go:172] (0xc0007166e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0505 00:58:04.501072 3573 log.go:172] (0xc0009a9290) Data frame received for 3\nI0505 00:58:04.501107 3573 log.go:172] (0xc000742000) (3) Data frame handling\nI0505 00:58:04.501429 3573 log.go:172] (0xc000742000) (3) Data frame sent\nI0505 00:58:04.502177 3573 log.go:172] (0xc0009a9290) Data frame received for 5\nI0505 00:58:04.502238 3573 log.go:172] (0xc0007166e0) (5) Data frame handling\nI0505 00:58:04.502278 3573 log.go:172] (0xc0009a9290) Data frame received for 3\nI0505 00:58:04.502312 3573 log.go:172] (0xc000742000) (3) Data frame handling\nI0505 00:58:04.504006 3573 log.go:172] (0xc0009a9290) Data frame received for 1\nI0505 00:58:04.504037 3573 log.go:172] (0xc000ae8320) (1) Data frame handling\nI0505 00:58:04.504063 3573 log.go:172] (0xc000ae8320) (1) Data frame sent\nI0505 00:58:04.504082 3573 log.go:172] (0xc0009a9290) (0xc000ae8320) Stream removed, broadcasting: 1\nI0505 00:58:04.504102 3573 log.go:172] (0xc0009a9290) Go away received\nI0505 00:58:04.504585 3573 log.go:172] (0xc0009a9290) (0xc000ae8320) Stream removed, broadcasting: 1\nI0505 00:58:04.504603 3573 log.go:172] (0xc0009a9290) (0xc000742000) Stream removed, broadcasting: 3\nI0505 00:58:04.504611 3573 log.go:172] (0xc0009a9290) (0xc0007166e0) Stream removed, broadcasting: 5\n" May 5 00:58:04.513: INFO: stdout: "iptables" May 5 00:58:04.513: INFO: proxyMode: iptables May 5 00:58:04.517: INFO: Waiting for 
pod kube-proxy-mode-detector to disappear May 5 00:58:04.530: INFO: Pod kube-proxy-mode-detector still exists May 5 00:58:06.530: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 5 00:58:06.534: INFO: Pod kube-proxy-mode-detector still exists May 5 00:58:08.530: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 5 00:58:08.534: INFO: Pod kube-proxy-mode-detector still exists May 5 00:58:10.530: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 5 00:58:10.534: INFO: Pod kube-proxy-mode-detector still exists May 5 00:58:12.530: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 5 00:58:12.534: INFO: Pod kube-proxy-mode-detector still exists May 5 00:58:14.530: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 5 00:58:14.534: INFO: Pod kube-proxy-mode-detector still exists May 5 00:58:16.530: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 5 00:58:16.534: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-8894 STEP: creating replication controller affinity-nodeport-timeout in namespace services-8894 I0505 00:58:16.639244 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-8894, replica count: 3 I0505 00:58:19.689637 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0505 00:58:22.689845 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 5 00:58:22.761: INFO: Creating new exec pod May 5 00:58:27.814: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8894 execpod-affinityg24rg -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' May 5 00:58:28.050: 
INFO: stderr: "I0505 00:58:27.977545 3593 log.go:172] (0xc0009606e0) (0xc0002d94a0) Create stream\nI0505 00:58:27.977612 3593 log.go:172] (0xc0009606e0) (0xc0002d94a0) Stream added, broadcasting: 1\nI0505 00:58:27.980227 3593 log.go:172] (0xc0009606e0) Reply frame received for 1\nI0505 00:58:27.980281 3593 log.go:172] (0xc0009606e0) (0xc000b0c000) Create stream\nI0505 00:58:27.980296 3593 log.go:172] (0xc0009606e0) (0xc000b0c000) Stream added, broadcasting: 3\nI0505 00:58:27.981894 3593 log.go:172] (0xc0009606e0) Reply frame received for 3\nI0505 00:58:27.981932 3593 log.go:172] (0xc0009606e0) (0xc000b0c0a0) Create stream\nI0505 00:58:27.981947 3593 log.go:172] (0xc0009606e0) (0xc000b0c0a0) Stream added, broadcasting: 5\nI0505 00:58:27.982991 3593 log.go:172] (0xc0009606e0) Reply frame received for 5\nI0505 00:58:28.042520 3593 log.go:172] (0xc0009606e0) Data frame received for 5\nI0505 00:58:28.042549 3593 log.go:172] (0xc000b0c0a0) (5) Data frame handling\nI0505 00:58:28.042570 3593 log.go:172] (0xc000b0c0a0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI0505 00:58:28.042902 3593 log.go:172] (0xc0009606e0) Data frame received for 5\nI0505 00:58:28.042935 3593 log.go:172] (0xc000b0c0a0) (5) Data frame handling\nI0505 00:58:28.042963 3593 log.go:172] (0xc000b0c0a0) (5) Data frame sent\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0505 00:58:28.043296 3593 log.go:172] (0xc0009606e0) Data frame received for 3\nI0505 00:58:28.043391 3593 log.go:172] (0xc0009606e0) Data frame received for 5\nI0505 00:58:28.043424 3593 log.go:172] (0xc000b0c0a0) (5) Data frame handling\nI0505 00:58:28.043452 3593 log.go:172] (0xc000b0c000) (3) Data frame handling\nI0505 00:58:28.045068 3593 log.go:172] (0xc0009606e0) Data frame received for 1\nI0505 00:58:28.045092 3593 log.go:172] (0xc0002d94a0) (1) Data frame handling\nI0505 00:58:28.045104 3593 log.go:172] (0xc0002d94a0) (1) Data frame sent\nI0505 00:58:28.045279 3593 log.go:172] 
(0xc0009606e0) (0xc0002d94a0) Stream removed, broadcasting: 1\nI0505 00:58:28.045586 3593 log.go:172] (0xc0009606e0) Go away received\nI0505 00:58:28.045649 3593 log.go:172] (0xc0009606e0) (0xc0002d94a0) Stream removed, broadcasting: 1\nI0505 00:58:28.045702 3593 log.go:172] (0xc0009606e0) (0xc000b0c000) Stream removed, broadcasting: 3\nI0505 00:58:28.045720 3593 log.go:172] (0xc0009606e0) (0xc000b0c0a0) Stream removed, broadcasting: 5\n" May 5 00:58:28.050: INFO: stdout: "" May 5 00:58:28.051: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8894 execpod-affinityg24rg -- /bin/sh -x -c nc -zv -t -w 2 10.106.171.153 80' May 5 00:58:28.270: INFO: stderr: "I0505 00:58:28.183365 3615 log.go:172] (0xc0000e0370) (0xc000500a00) Create stream\nI0505 00:58:28.183434 3615 log.go:172] (0xc0000e0370) (0xc000500a00) Stream added, broadcasting: 1\nI0505 00:58:28.188154 3615 log.go:172] (0xc0000e0370) Reply frame received for 1\nI0505 00:58:28.188302 3615 log.go:172] (0xc0000e0370) (0xc00082caa0) Create stream\nI0505 00:58:28.188376 3615 log.go:172] (0xc0000e0370) (0xc00082caa0) Stream added, broadcasting: 3\nI0505 00:58:28.190671 3615 log.go:172] (0xc0000e0370) Reply frame received for 3\nI0505 00:58:28.190699 3615 log.go:172] (0xc0000e0370) (0xc0005012c0) Create stream\nI0505 00:58:28.190710 3615 log.go:172] (0xc0000e0370) (0xc0005012c0) Stream added, broadcasting: 5\nI0505 00:58:28.191813 3615 log.go:172] (0xc0000e0370) Reply frame received for 5\nI0505 00:58:28.263942 3615 log.go:172] (0xc0000e0370) Data frame received for 5\nI0505 00:58:28.263974 3615 log.go:172] (0xc0005012c0) (5) Data frame handling\nI0505 00:58:28.263987 3615 log.go:172] (0xc0005012c0) (5) Data frame sent\nI0505 00:58:28.263996 3615 log.go:172] (0xc0000e0370) Data frame received for 5\nI0505 00:58:28.264008 3615 log.go:172] (0xc0005012c0) (5) Data frame handling\nI0505 00:58:28.264027 3615 log.go:172] (0xc0000e0370) Data 
frame received for 3\n+ nc -zv -t -w 2 10.106.171.153 80\nConnection to 10.106.171.153 80 port [tcp/http] succeeded!\nI0505 00:58:28.264036 3615 log.go:172] (0xc00082caa0) (3) Data frame handling\nI0505 00:58:28.265673 3615 log.go:172] (0xc0000e0370) Data frame received for 1\nI0505 00:58:28.265716 3615 log.go:172] (0xc000500a00) (1) Data frame handling\nI0505 00:58:28.265735 3615 log.go:172] (0xc000500a00) (1) Data frame sent\nI0505 00:58:28.265754 3615 log.go:172] (0xc0000e0370) (0xc000500a00) Stream removed, broadcasting: 1\nI0505 00:58:28.265857 3615 log.go:172] (0xc0000e0370) Go away received\nI0505 00:58:28.266126 3615 log.go:172] (0xc0000e0370) (0xc000500a00) Stream removed, broadcasting: 1\nI0505 00:58:28.266147 3615 log.go:172] (0xc0000e0370) (0xc00082caa0) Stream removed, broadcasting: 3\nI0505 00:58:28.266158 3615 log.go:172] (0xc0000e0370) (0xc0005012c0) Stream removed, broadcasting: 5\n" May 5 00:58:28.270: INFO: stdout: "" May 5 00:58:28.270: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8894 execpod-affinityg24rg -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30285' May 5 00:58:28.486: INFO: stderr: "I0505 00:58:28.413374 3635 log.go:172] (0xc000b36210) (0xc0009b0c80) Create stream\nI0505 00:58:28.413429 3635 log.go:172] (0xc000b36210) (0xc0009b0c80) Stream added, broadcasting: 1\nI0505 00:58:28.415614 3635 log.go:172] (0xc000b36210) Reply frame received for 1\nI0505 00:58:28.415689 3635 log.go:172] (0xc000b36210) (0xc0004b6f00) Create stream\nI0505 00:58:28.415720 3635 log.go:172] (0xc000b36210) (0xc0004b6f00) Stream added, broadcasting: 3\nI0505 00:58:28.416895 3635 log.go:172] (0xc000b36210) Reply frame received for 3\nI0505 00:58:28.416933 3635 log.go:172] (0xc000b36210) (0xc000434aa0) Create stream\nI0505 00:58:28.416949 3635 log.go:172] (0xc000b36210) (0xc000434aa0) Stream added, broadcasting: 5\nI0505 00:58:28.418323 3635 log.go:172] (0xc000b36210) Reply frame 
received for 5\nI0505 00:58:28.476607 3635 log.go:172] (0xc000b36210) Data frame received for 3\nI0505 00:58:28.476675 3635 log.go:172] (0xc0004b6f00) (3) Data frame handling\nI0505 00:58:28.476910 3635 log.go:172] (0xc000b36210) Data frame received for 5\nI0505 00:58:28.476954 3635 log.go:172] (0xc000434aa0) (5) Data frame handling\nI0505 00:58:28.476973 3635 log.go:172] (0xc000434aa0) (5) Data frame sent\nI0505 00:58:28.476984 3635 log.go:172] (0xc000b36210) Data frame received for 5\nI0505 00:58:28.476993 3635 log.go:172] (0xc000434aa0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30285\nConnection to 172.17.0.13 30285 port [tcp/30285] succeeded!\nI0505 00:58:28.480351 3635 log.go:172] (0xc000b36210) Data frame received for 1\nI0505 00:58:28.480390 3635 log.go:172] (0xc0009b0c80) (1) Data frame handling\nI0505 00:58:28.480417 3635 log.go:172] (0xc0009b0c80) (1) Data frame sent\nI0505 00:58:28.480609 3635 log.go:172] (0xc000b36210) (0xc0009b0c80) Stream removed, broadcasting: 1\nI0505 00:58:28.481421 3635 log.go:172] (0xc000b36210) (0xc0009b0c80) Stream removed, broadcasting: 1\nI0505 00:58:28.481474 3635 log.go:172] (0xc000b36210) (0xc0004b6f00) Stream removed, broadcasting: 3\nI0505 00:58:28.481490 3635 log.go:172] (0xc000b36210) (0xc000434aa0) Stream removed, broadcasting: 5\n" May 5 00:58:28.486: INFO: stdout: "" May 5 00:58:28.486: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8894 execpod-affinityg24rg -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30285' May 5 00:58:28.708: INFO: stderr: "I0505 00:58:28.620238 3655 log.go:172] (0xc000b02d10) (0xc00056fcc0) Create stream\nI0505 00:58:28.620288 3655 log.go:172] (0xc000b02d10) (0xc00056fcc0) Stream added, broadcasting: 1\nI0505 00:58:28.622081 3655 log.go:172] (0xc000b02d10) Reply frame received for 1\nI0505 00:58:28.622113 3655 log.go:172] (0xc000b02d10) (0xc0003640a0) Create stream\nI0505 00:58:28.622126 3655 
log.go:172] (0xc000b02d10) (0xc0003640a0) Stream added, broadcasting: 3\nI0505 00:58:28.622837 3655 log.go:172] (0xc000b02d10) Reply frame received for 3\nI0505 00:58:28.622868 3655 log.go:172] (0xc000b02d10) (0xc000308000) Create stream\nI0505 00:58:28.622875 3655 log.go:172] (0xc000b02d10) (0xc000308000) Stream added, broadcasting: 5\nI0505 00:58:28.623592 3655 log.go:172] (0xc000b02d10) Reply frame received for 5\nI0505 00:58:28.700210 3655 log.go:172] (0xc000b02d10) Data frame received for 5\nI0505 00:58:28.700249 3655 log.go:172] (0xc000308000) (5) Data frame handling\nI0505 00:58:28.700282 3655 log.go:172] (0xc000308000) (5) Data frame sent\nI0505 00:58:28.700301 3655 log.go:172] (0xc000b02d10) Data frame received for 5\n+ nc -zv -t -w 2 172.17.0.12 30285\nI0505 00:58:28.700316 3655 log.go:172] (0xc000308000) (5) Data frame handling\nI0505 00:58:28.700334 3655 log.go:172] (0xc000308000) (5) Data frame sent\nConnection to 172.17.0.12 30285 port [tcp/30285] succeeded!\nI0505 00:58:28.700671 3655 log.go:172] (0xc000b02d10) Data frame received for 5\nI0505 00:58:28.700779 3655 log.go:172] (0xc000308000) (5) Data frame handling\nI0505 00:58:28.700819 3655 log.go:172] (0xc000b02d10) Data frame received for 3\nI0505 00:58:28.700832 3655 log.go:172] (0xc0003640a0) (3) Data frame handling\nI0505 00:58:28.702508 3655 log.go:172] (0xc000b02d10) Data frame received for 1\nI0505 00:58:28.702542 3655 log.go:172] (0xc00056fcc0) (1) Data frame handling\nI0505 00:58:28.702571 3655 log.go:172] (0xc00056fcc0) (1) Data frame sent\nI0505 00:58:28.702733 3655 log.go:172] (0xc000b02d10) (0xc00056fcc0) Stream removed, broadcasting: 1\nI0505 00:58:28.702775 3655 log.go:172] (0xc000b02d10) Go away received\nI0505 00:58:28.703138 3655 log.go:172] (0xc000b02d10) (0xc00056fcc0) Stream removed, broadcasting: 1\nI0505 00:58:28.703165 3655 log.go:172] (0xc000b02d10) (0xc0003640a0) Stream removed, broadcasting: 3\nI0505 00:58:28.703177 3655 log.go:172] (0xc000b02d10) (0xc000308000) Stream 
removed, broadcasting: 5\n" May 5 00:58:28.708: INFO: stdout: "" May 5 00:58:28.708: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8894 execpod-affinityg24rg -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30285/ ; done' May 5 00:58:29.012: INFO: stderr: "I0505 00:58:28.846681 3675 log.go:172] (0xc00003ad10) (0xc000606820) Create stream\nI0505 00:58:28.846740 3675 log.go:172] (0xc00003ad10) (0xc000606820) Stream added, broadcasting: 1\nI0505 00:58:28.848712 3675 log.go:172] (0xc00003ad10) Reply frame received for 1\nI0505 00:58:28.848736 3675 log.go:172] (0xc00003ad10) (0xc000607360) Create stream\nI0505 00:58:28.848742 3675 log.go:172] (0xc00003ad10) (0xc000607360) Stream added, broadcasting: 3\nI0505 00:58:28.849888 3675 log.go:172] (0xc00003ad10) Reply frame received for 3\nI0505 00:58:28.849938 3675 log.go:172] (0xc00003ad10) (0xc000607cc0) Create stream\nI0505 00:58:28.849950 3675 log.go:172] (0xc00003ad10) (0xc000607cc0) Stream added, broadcasting: 5\nI0505 00:58:28.850794 3675 log.go:172] (0xc00003ad10) Reply frame received for 5\nI0505 00:58:28.917933 3675 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:58:28.917962 3675 log.go:172] (0xc000607cc0) (5) Data frame handling\nI0505 00:58:28.917977 3675 log.go:172] (0xc000607cc0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30285/\nI0505 00:58:28.918005 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:28.918021 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:28.918033 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:28.922740 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:28.922774 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:28.922810 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:28.923185 
3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:28.923200 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:28.923207 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:28.923218 3675 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:58:28.923224 3675 log.go:172] (0xc000607cc0) (5) Data frame handling\nI0505 00:58:28.923229 3675 log.go:172] (0xc000607cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30285/\nI0505 00:58:28.927206 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:28.927219 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:28.927232 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:28.927762 3675 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:58:28.927778 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:28.927789 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:28.927795 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:28.927803 3675 log.go:172] (0xc000607cc0) (5) Data frame handling\nI0505 00:58:28.927808 3675 log.go:172] (0xc000607cc0) (5) Data frame sent\nI0505 00:58:28.927814 3675 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:58:28.927817 3675 log.go:172] (0xc000607cc0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30285/\nI0505 00:58:28.927829 3675 log.go:172] (0xc000607cc0) (5) Data frame sent\nI0505 00:58:28.935583 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:28.935606 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:28.935612 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:28.936078 3675 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:58:28.936090 3675 log.go:172] (0xc000607cc0) (5) Data frame handling\nI0505 00:58:28.936095 3675 log.go:172] (0xc000607cc0) (5) Data frame sent\n+ 
echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30285/\nI0505 00:58:28.936127 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:28.936156 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:28.936203 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:28.942363 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:28.942377 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:28.942390 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:28.943275 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:28.943328 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:28.943344 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:28.943381 3675 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:58:28.943425 3675 log.go:172] (0xc000607cc0) (5) Data frame handling\nI0505 00:58:28.943448 3675 log.go:172] (0xc000607cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30285/\nI0505 00:58:28.947787 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:28.947816 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:28.947844 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:28.948344 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:28.948380 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:28.948415 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:28.948466 3675 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:58:28.948490 3675 log.go:172] (0xc000607cc0) (5) Data frame handling\nI0505 00:58:28.948514 3675 log.go:172] (0xc000607cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30285/\nI0505 00:58:28.953035 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:28.953059 3675 log.go:172] (0xc000607360) (3) Data frame 
handling\nI0505 00:58:28.953074 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:28.954122 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:28.954158 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:28.954172 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:28.954192 3675 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:58:28.954204 3675 log.go:172] (0xc000607cc0) (5) Data frame handling\nI0505 00:58:28.954223 3675 log.go:172] (0xc000607cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30285/\nI0505 00:58:28.958984 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:28.959003 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:28.959024 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:28.959234 3675 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:58:28.959246 3675 log.go:172] (0xc000607cc0) (5) Data frame handling\nI0505 00:58:28.959258 3675 log.go:172] (0xc000607cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30285/\nI0505 00:58:28.959297 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:28.959339 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:28.959371 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:28.964923 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:28.964950 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:28.964966 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:28.965503 3675 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:58:28.965529 3675 log.go:172] (0xc000607cc0) (5) Data frame handling\nI0505 00:58:28.965550 3675 log.go:172] (0xc000607cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30285/\nI0505 00:58:28.965619 3675 log.go:172] (0xc00003ad10) Data frame 
received for 3\nI0505 00:58:28.965630 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:28.965638 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:28.969743 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:28.969769 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:28.969791 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:28.970746 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:28.970781 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:28.970794 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:28.970811 3675 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:58:28.970821 3675 log.go:172] (0xc000607cc0) (5) Data frame handling\nI0505 00:58:28.970844 3675 log.go:172] (0xc000607cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30285/\nI0505 00:58:28.975614 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:28.975649 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:28.975668 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:28.976030 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:28.976063 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:28.976075 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:28.976092 3675 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:58:28.976102 3675 log.go:172] (0xc000607cc0) (5) Data frame handling\nI0505 00:58:28.976122 3675 log.go:172] (0xc000607cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30285/\nI0505 00:58:28.983645 3675 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:58:28.983720 3675 log.go:172] (0xc000607cc0) (5) Data frame handling\nI0505 00:58:28.983748 3675 log.go:172] (0xc000607cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.17.0.13:30285/\nI0505 00:58:28.983782 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:28.983895 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:28.983945 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:28.983971 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:28.983995 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:28.984042 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:28.986924 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:28.986956 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:28.986977 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:28.987230 3675 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:58:28.987253 3675 log.go:172] (0xc000607cc0) (5) Data frame handling\nI0505 00:58:28.987261 3675 log.go:172] (0xc000607cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30285/\nI0505 00:58:28.987271 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:28.987277 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:28.987283 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:28.991575 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:28.991601 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:28.991618 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:28.991975 3675 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:58:28.992003 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:28.992020 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:28.992030 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:28.992060 3675 log.go:172] (0xc000607cc0) (5) Data frame handling\nI0505 00:58:28.992069 3675 log.go:172] (0xc000607cc0) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://172.17.0.13:30285/\nI0505 00:58:28.996102 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:28.996127 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:28.996150 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:28.996619 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:28.996646 3675 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:58:28.996678 3675 log.go:172] (0xc000607cc0) (5) Data frame handling\nI0505 00:58:28.996692 3675 log.go:172] (0xc000607cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30285/\nI0505 00:58:28.996716 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:28.996727 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:29.000440 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:29.000462 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:29.000481 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:29.000883 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:29.000918 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:29.000933 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:29.000951 3675 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:58:29.000961 3675 log.go:172] (0xc000607cc0) (5) Data frame handling\nI0505 00:58:29.000977 3675 log.go:172] (0xc000607cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30285/\nI0505 00:58:29.004916 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:29.004936 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:29.004953 3675 log.go:172] (0xc000607360) (3) Data frame sent\nI0505 00:58:29.005837 3675 log.go:172] (0xc00003ad10) Data frame received for 5\nI0505 00:58:29.005883 3675 log.go:172] (0xc000607cc0) (5) Data frame handling\nI0505 
00:58:29.005904 3675 log.go:172] (0xc00003ad10) Data frame received for 3\nI0505 00:58:29.005917 3675 log.go:172] (0xc000607360) (3) Data frame handling\nI0505 00:58:29.007680 3675 log.go:172] (0xc00003ad10) Data frame received for 1\nI0505 00:58:29.007703 3675 log.go:172] (0xc000606820) (1) Data frame handling\nI0505 00:58:29.007733 3675 log.go:172] (0xc000606820) (1) Data frame sent\nI0505 00:58:29.007752 3675 log.go:172] (0xc00003ad10) (0xc000606820) Stream removed, broadcasting: 1\nI0505 00:58:29.007774 3675 log.go:172] (0xc00003ad10) Go away received\nI0505 00:58:29.008187 3675 log.go:172] (0xc00003ad10) (0xc000606820) Stream removed, broadcasting: 1\nI0505 00:58:29.008208 3675 log.go:172] (0xc00003ad10) (0xc000607360) Stream removed, broadcasting: 3\nI0505 00:58:29.008235 3675 log.go:172] (0xc00003ad10) (0xc000607cc0) Stream removed, broadcasting: 5\n" May 5 00:58:29.013: INFO: stdout: "\naffinity-nodeport-timeout-rtkrw\naffinity-nodeport-timeout-rtkrw\naffinity-nodeport-timeout-rtkrw\naffinity-nodeport-timeout-rtkrw\naffinity-nodeport-timeout-rtkrw\naffinity-nodeport-timeout-rtkrw\naffinity-nodeport-timeout-rtkrw\naffinity-nodeport-timeout-rtkrw\naffinity-nodeport-timeout-rtkrw\naffinity-nodeport-timeout-rtkrw\naffinity-nodeport-timeout-rtkrw\naffinity-nodeport-timeout-rtkrw\naffinity-nodeport-timeout-rtkrw\naffinity-nodeport-timeout-rtkrw\naffinity-nodeport-timeout-rtkrw\naffinity-nodeport-timeout-rtkrw" May 5 00:58:29.013: INFO: Received response from host: May 5 00:58:29.013: INFO: Received response from host: affinity-nodeport-timeout-rtkrw May 5 00:58:29.013: INFO: Received response from host: affinity-nodeport-timeout-rtkrw May 5 00:58:29.013: INFO: Received response from host: affinity-nodeport-timeout-rtkrw May 5 00:58:29.013: INFO: Received response from host: affinity-nodeport-timeout-rtkrw May 5 00:58:29.013: INFO: Received response from host: affinity-nodeport-timeout-rtkrw May 5 00:58:29.013: INFO: Received response from host: 
affinity-nodeport-timeout-rtkrw May 5 00:58:29.013: INFO: Received response from host: affinity-nodeport-timeout-rtkrw May 5 00:58:29.013: INFO: Received response from host: affinity-nodeport-timeout-rtkrw May 5 00:58:29.013: INFO: Received response from host: affinity-nodeport-timeout-rtkrw May 5 00:58:29.013: INFO: Received response from host: affinity-nodeport-timeout-rtkrw May 5 00:58:29.013: INFO: Received response from host: affinity-nodeport-timeout-rtkrw May 5 00:58:29.013: INFO: Received response from host: affinity-nodeport-timeout-rtkrw May 5 00:58:29.013: INFO: Received response from host: affinity-nodeport-timeout-rtkrw May 5 00:58:29.013: INFO: Received response from host: affinity-nodeport-timeout-rtkrw May 5 00:58:29.013: INFO: Received response from host: affinity-nodeport-timeout-rtkrw May 5 00:58:29.013: INFO: Received response from host: affinity-nodeport-timeout-rtkrw May 5 00:58:29.014: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8894 execpod-affinityg24rg -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:30285/' May 5 00:58:29.218: INFO: stderr: "I0505 00:58:29.145244 3698 log.go:172] (0xc000aca000) (0xc000550320) Create stream\nI0505 00:58:29.145306 3698 log.go:172] (0xc000aca000) (0xc000550320) Stream added, broadcasting: 1\nI0505 00:58:29.147995 3698 log.go:172] (0xc000aca000) Reply frame received for 1\nI0505 00:58:29.148035 3698 log.go:172] (0xc000aca000) (0xc000550780) Create stream\nI0505 00:58:29.148043 3698 log.go:172] (0xc000aca000) (0xc000550780) Stream added, broadcasting: 3\nI0505 00:58:29.149054 3698 log.go:172] (0xc000aca000) Reply frame received for 3\nI0505 00:58:29.149439 3698 log.go:172] (0xc000aca000) (0xc000551040) Create stream\nI0505 00:58:29.149466 3698 log.go:172] (0xc000aca000) (0xc000551040) Stream added, broadcasting: 5\nI0505 00:58:29.150400 3698 log.go:172] (0xc000aca000) Reply frame received for 5\nI0505 
00:58:29.203136 3698 log.go:172] (0xc000aca000) Data frame received for 5\nI0505 00:58:29.203182 3698 log.go:172] (0xc000551040) (5) Data frame handling\nI0505 00:58:29.203213 3698 log.go:172] (0xc000551040) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30285/\nI0505 00:58:29.210042 3698 log.go:172] (0xc000aca000) Data frame received for 3\nI0505 00:58:29.210079 3698 log.go:172] (0xc000550780) (3) Data frame handling\nI0505 00:58:29.210104 3698 log.go:172] (0xc000550780) (3) Data frame sent\nI0505 00:58:29.210777 3698 log.go:172] (0xc000aca000) Data frame received for 3\nI0505 00:58:29.210804 3698 log.go:172] (0xc000550780) (3) Data frame handling\nI0505 00:58:29.210820 3698 log.go:172] (0xc000aca000) Data frame received for 5\nI0505 00:58:29.210827 3698 log.go:172] (0xc000551040) (5) Data frame handling\nI0505 00:58:29.212116 3698 log.go:172] (0xc000aca000) Data frame received for 1\nI0505 00:58:29.212137 3698 log.go:172] (0xc000550320) (1) Data frame handling\nI0505 00:58:29.212152 3698 log.go:172] (0xc000550320) (1) Data frame sent\nI0505 00:58:29.212161 3698 log.go:172] (0xc000aca000) (0xc000550320) Stream removed, broadcasting: 1\nI0505 00:58:29.212383 3698 log.go:172] (0xc000aca000) Go away received\nI0505 00:58:29.212473 3698 log.go:172] (0xc000aca000) (0xc000550320) Stream removed, broadcasting: 1\nI0505 00:58:29.212493 3698 log.go:172] (0xc000aca000) (0xc000550780) Stream removed, broadcasting: 3\nI0505 00:58:29.212505 3698 log.go:172] (0xc000aca000) (0xc000551040) Stream removed, broadcasting: 5\n" May 5 00:58:29.218: INFO: stdout: "affinity-nodeport-timeout-rtkrw" May 5 00:58:44.218: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8894 execpod-affinityg24rg -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:30285/' May 5 00:58:44.462: INFO: stderr: "I0505 00:58:44.359239 3717 log.go:172] (0xc0000e8840) (0xc0006c55e0) Create 
stream\nI0505 00:58:44.359310 3717 log.go:172] (0xc0000e8840) (0xc0006c55e0) Stream added, broadcasting: 1\nI0505 00:58:44.362211 3717 log.go:172] (0xc0000e8840) Reply frame received for 1\nI0505 00:58:44.362266 3717 log.go:172] (0xc0000e8840) (0xc00054a640) Create stream\nI0505 00:58:44.362282 3717 log.go:172] (0xc0000e8840) (0xc00054a640) Stream added, broadcasting: 3\nI0505 00:58:44.363180 3717 log.go:172] (0xc0000e8840) Reply frame received for 3\nI0505 00:58:44.363227 3717 log.go:172] (0xc0000e8840) (0xc0004c6e60) Create stream\nI0505 00:58:44.363247 3717 log.go:172] (0xc0000e8840) (0xc0004c6e60) Stream added, broadcasting: 5\nI0505 00:58:44.364033 3717 log.go:172] (0xc0000e8840) Reply frame received for 5\nI0505 00:58:44.449863 3717 log.go:172] (0xc0000e8840) Data frame received for 5\nI0505 00:58:44.449907 3717 log.go:172] (0xc0004c6e60) (5) Data frame handling\nI0505 00:58:44.449936 3717 log.go:172] (0xc0004c6e60) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30285/\nI0505 00:58:44.454583 3717 log.go:172] (0xc0000e8840) Data frame received for 3\nI0505 00:58:44.454604 3717 log.go:172] (0xc00054a640) (3) Data frame handling\nI0505 00:58:44.454630 3717 log.go:172] (0xc00054a640) (3) Data frame sent\nI0505 00:58:44.455750 3717 log.go:172] (0xc0000e8840) Data frame received for 5\nI0505 00:58:44.455782 3717 log.go:172] (0xc0004c6e60) (5) Data frame handling\nI0505 00:58:44.455823 3717 log.go:172] (0xc0000e8840) Data frame received for 3\nI0505 00:58:44.455855 3717 log.go:172] (0xc00054a640) (3) Data frame handling\nI0505 00:58:44.457581 3717 log.go:172] (0xc0000e8840) Data frame received for 1\nI0505 00:58:44.457602 3717 log.go:172] (0xc0006c55e0) (1) Data frame handling\nI0505 00:58:44.457617 3717 log.go:172] (0xc0006c55e0) (1) Data frame sent\nI0505 00:58:44.457639 3717 log.go:172] (0xc0000e8840) (0xc0006c55e0) Stream removed, broadcasting: 1\nI0505 00:58:44.457663 3717 log.go:172] (0xc0000e8840) Go away received\nI0505 
00:58:44.458036 3717 log.go:172] (0xc0000e8840) (0xc0006c55e0) Stream removed, broadcasting: 1\nI0505 00:58:44.458058 3717 log.go:172] (0xc0000e8840) (0xc00054a640) Stream removed, broadcasting: 3\nI0505 00:58:44.458068 3717 log.go:172] (0xc0000e8840) (0xc0004c6e60) Stream removed, broadcasting: 5\n" May 5 00:58:44.462: INFO: stdout: "affinity-nodeport-timeout-kbmsk" May 5 00:58:44.462: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-8894, will wait for the garbage collector to delete the pods May 5 00:58:44.600: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 6.98136ms May 5 00:58:45.001: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 400.423ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:58:54.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8894" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:54.891 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":254,"skipped":4347,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:58:54.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0505 00:59:05.616849 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
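The session-affinity test above works by curling the NodePort 16 times and then checking that every response names the same backend pod. A minimal standalone sketch of that verification step, using illustrative captured responses in place of live curl output (the pod name is taken from the log; the check itself is an assumption about the test's intent, not the framework's exact code):

```shell
#!/bin/sh
# Responses as captured from the curl loop; with session affinity working,
# every request is served by the same pod (illustrative sample).
responses="affinity-nodeport-timeout-rtkrw
affinity-nodeport-timeout-rtkrw
affinity-nodeport-timeout-rtkrw"

# Session affinity holds when exactly one distinct backend appears.
distinct=$(printf '%s\n' "$responses" | sort -u | wc -l)
if [ "$distinct" -eq 1 ]; then
  echo "session affinity held"
else
  echo "affinity broken: $distinct backends"
fi
```

The timeout half of the test then waits past the affinity window (the log shows a 15-second gap before the final curl) and expects a different pod name, which is why the last response is `affinity-nodeport-timeout-kbmsk` rather than `-rtkrw`.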
May 5 00:59:05.616: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:59:05.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4224" for this suite. 
• [SLOW TEST:10.629 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":288,"completed":255,"skipped":4373,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:59:05.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-6c17d474-b3d1-43f2-8e7b-1e2cef2f20ee STEP: Creating a pod to test consume configMaps May 5 00:59:05.787: INFO: Waiting up to 5m0s for pod "pod-configmaps-26aafef6-e326-4a34-9c6f-ef2b98b9db4c" in namespace "configmap-6646" to be "Succeeded or Failed" May 5 00:59:05.791: INFO: Pod "pod-configmaps-26aafef6-e326-4a34-9c6f-ef2b98b9db4c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.412012ms May 5 00:59:11.968: INFO: Pod "pod-configmaps-26aafef6-e326-4a34-9c6f-ef2b98b9db4c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.181032691s May 5 00:59:13.972: INFO: Pod "pod-configmaps-26aafef6-e326-4a34-9c6f-ef2b98b9db4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.185074081s STEP: Saw pod success May 5 00:59:13.972: INFO: Pod "pod-configmaps-26aafef6-e326-4a34-9c6f-ef2b98b9db4c" satisfied condition "Succeeded or Failed" May 5 00:59:13.975: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-26aafef6-e326-4a34-9c6f-ef2b98b9db4c container configmap-volume-test: STEP: delete the pod May 5 00:59:14.102: INFO: Waiting for pod pod-configmaps-26aafef6-e326-4a34-9c6f-ef2b98b9db4c to disappear May 5 00:59:14.129: INFO: Pod pod-configmaps-26aafef6-e326-4a34-9c6f-ef2b98b9db4c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 00:59:14.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6646" for this suite. • [SLOW TEST:8.539 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":256,"skipped":4373,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 00:59:14.165: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-2c5492bd-f888-42cb-a6f6-a165e39c160f STEP: Creating configMap with name cm-test-opt-upd-0f7c8ca8-bc2f-4841-a836-1b754b537468 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-2c5492bd-f888-42cb-a6f6-a165e39c160f STEP: Updating configmap cm-test-opt-upd-0f7c8ca8-bc2f-4841-a836-1b754b537468 STEP: Creating configMap with name cm-test-opt-create-6196b6a2-6f24-4822-be91-b7f12705c004 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:00:54.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2520" for this suite. 
• [SLOW TEST:99.906 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":257,"skipped":4399,"failed":0} SSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:00:54.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-2235 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-2235 STEP: Deleting pre-stop pod May 5 01:01:07.304: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:01:07.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-2235" for this suite. • [SLOW TEST:13.285 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":288,"completed":258,"skipped":4402,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:01:07.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 5 01:01:07.427: INFO: Waiting up to 5m0s for pod "pod-9deaa72e-ae7d-4a62-83e0-82e35fea79a6" in namespace "emptydir-1179" to be "Succeeded or 
Failed" May 5 01:01:07.512: INFO: Pod "pod-9deaa72e-ae7d-4a62-83e0-82e35fea79a6": Phase="Pending", Reason="", readiness=false. Elapsed: 84.887443ms May 5 01:01:09.516: INFO: Pod "pod-9deaa72e-ae7d-4a62-83e0-82e35fea79a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089428686s May 5 01:01:11.521: INFO: Pod "pod-9deaa72e-ae7d-4a62-83e0-82e35fea79a6": Phase="Running", Reason="", readiness=true. Elapsed: 4.093760555s May 5 01:01:13.525: INFO: Pod "pod-9deaa72e-ae7d-4a62-83e0-82e35fea79a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.097966975s STEP: Saw pod success May 5 01:01:13.525: INFO: Pod "pod-9deaa72e-ae7d-4a62-83e0-82e35fea79a6" satisfied condition "Succeeded or Failed" May 5 01:01:13.528: INFO: Trying to get logs from node latest-worker pod pod-9deaa72e-ae7d-4a62-83e0-82e35fea79a6 container test-container: STEP: delete the pod May 5 01:01:13.565: INFO: Waiting for pod pod-9deaa72e-ae7d-4a62-83e0-82e35fea79a6 to disappear May 5 01:01:13.580: INFO: Pod pod-9deaa72e-ae7d-4a62-83e0-82e35fea79a6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:01:13.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1179" for this suite. 
• [SLOW TEST:6.231 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":259,"skipped":4404,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:01:13.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 5 01:01:23.759: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 5 01:01:23.786: INFO: Pod pod-with-prestop-exec-hook still exists May 5 01:01:25.786: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 5 01:01:25.790: INFO: Pod pod-with-prestop-exec-hook still exists May 5 01:01:27.786: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 5 01:01:27.791: INFO: Pod pod-with-prestop-exec-hook still exists May 5 01:01:29.786: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 5 01:01:30.798: INFO: Pod pod-with-prestop-exec-hook still exists May 5 01:01:31.786: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 5 01:01:32.622: INFO: Pod pod-with-prestop-exec-hook still exists May 5 01:01:33.786: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 5 01:01:33.791: INFO: Pod pod-with-prestop-exec-hook still exists May 5 01:01:35.786: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 5 01:01:35.790: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:01:35.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6861" for this suite. 
• [SLOW TEST:22.218 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":288,"completed":260,"skipped":4410,"failed":0} [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:01:35.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 5 01:01:35.907: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b32953ac-ea2b-4498-9555-fc5de642d74a" in namespace "downward-api-4510" to be "Succeeded or Failed" May 5 01:01:35.924: INFO: Pod "downwardapi-volume-b32953ac-ea2b-4498-9555-fc5de642d74a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.364166ms May 5 01:01:37.928: INFO: Pod "downwardapi-volume-b32953ac-ea2b-4498-9555-fc5de642d74a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021592214s May 5 01:01:39.932: INFO: Pod "downwardapi-volume-b32953ac-ea2b-4498-9555-fc5de642d74a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024930259s STEP: Saw pod success May 5 01:01:39.932: INFO: Pod "downwardapi-volume-b32953ac-ea2b-4498-9555-fc5de642d74a" satisfied condition "Succeeded or Failed" May 5 01:01:39.935: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-b32953ac-ea2b-4498-9555-fc5de642d74a container client-container: STEP: delete the pod May 5 01:01:39.969: INFO: Waiting for pod downwardapi-volume-b32953ac-ea2b-4498-9555-fc5de642d74a to disappear May 5 01:01:39.982: INFO: Pod downwardapi-volume-b32953ac-ea2b-4498-9555-fc5de642d74a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:01:39.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4510" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":261,"skipped":4410,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:01:39.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 5 01:01:46.608: INFO: 10 pods remaining May 5 01:01:46.608: INFO: 0 pods has nil DeletionTimestamp May 5 01:01:46.608: INFO: May 5 01:01:48.281: INFO: 0 pods remaining May 5 01:01:48.281: INFO: 0 pods has nil DeletionTimestamp May 5 01:01:48.281: INFO: STEP: Gathering metrics W0505 01:01:49.173856 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 5 01:01:49.173: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:01:49.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8084" for this suite. 
• [SLOW TEST:9.848 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":288,"completed":262,"skipped":4457,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:01:49.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions May 5 01:01:50.371: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config api-versions' May 5 01:01:50.894: INFO: stderr: "" May 5 01:01:50.894: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:01:50.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6357" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":288,"completed":263,"skipped":4465,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:01:50.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container May 5 01:01:55.446: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-4557 PodName:pod-sharedvolume-b2050bce-0f1c-4977-a692-04702d9323e0 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 01:01:55.446: INFO: >>> kubeConfig: /root/.kube/config I0505 01:01:55.478459 7 log.go:172] (0xc002b79c30) (0xc002cc8fa0) Create stream I0505 01:01:55.478484 7 log.go:172] (0xc002b79c30) (0xc002cc8fa0) Stream added, broadcasting: 1 I0505 01:01:55.480238 7 log.go:172] (0xc002b79c30) Reply frame received for 1 I0505 01:01:55.480286 7 log.go:172] (0xc002b79c30) (0xc002cc90e0) Create stream I0505 01:01:55.480300 7 log.go:172] (0xc002b79c30) (0xc002cc90e0) Stream added, broadcasting: 3 I0505 01:01:55.481384 7 log.go:172] (0xc002b79c30) Reply frame received for 3 I0505 01:01:55.481429 7 log.go:172] (0xc002b79c30) 
(0xc000b0caa0) Create stream I0505 01:01:55.481446 7 log.go:172] (0xc002b79c30) (0xc000b0caa0) Stream added, broadcasting: 5 I0505 01:01:55.482410 7 log.go:172] (0xc002b79c30) Reply frame received for 5 I0505 01:01:55.549570 7 log.go:172] (0xc002b79c30) Data frame received for 3 I0505 01:01:55.549611 7 log.go:172] (0xc002cc90e0) (3) Data frame handling I0505 01:01:55.549624 7 log.go:172] (0xc002cc90e0) (3) Data frame sent I0505 01:01:55.549634 7 log.go:172] (0xc002b79c30) Data frame received for 3 I0505 01:01:55.549647 7 log.go:172] (0xc002cc90e0) (3) Data frame handling I0505 01:01:55.550050 7 log.go:172] (0xc002b79c30) Data frame received for 5 I0505 01:01:55.550086 7 log.go:172] (0xc000b0caa0) (5) Data frame handling I0505 01:01:55.551710 7 log.go:172] (0xc002b79c30) Data frame received for 1 I0505 01:01:55.551726 7 log.go:172] (0xc002cc8fa0) (1) Data frame handling I0505 01:01:55.551745 7 log.go:172] (0xc002cc8fa0) (1) Data frame sent I0505 01:01:55.551820 7 log.go:172] (0xc002b79c30) (0xc002cc8fa0) Stream removed, broadcasting: 1 I0505 01:01:55.551934 7 log.go:172] (0xc002b79c30) (0xc002cc8fa0) Stream removed, broadcasting: 1 I0505 01:01:55.551946 7 log.go:172] (0xc002b79c30) (0xc002cc90e0) Stream removed, broadcasting: 3 I0505 01:01:55.552056 7 log.go:172] (0xc002b79c30) Go away received I0505 01:01:55.552107 7 log.go:172] (0xc002b79c30) (0xc000b0caa0) Stream removed, broadcasting: 5 May 5 01:01:55.552: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:01:55.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4557" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":288,"completed":264,"skipped":4471,"failed":0} SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:01:55.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-4996 STEP: creating a selector STEP: Creating the service pods in kubernetes May 5 01:01:55.731: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 5 01:01:56.009: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 5 01:01:58.052: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 5 01:02:00.014: INFO: The status of Pod netserver-0 is Running (Ready = false) May 5 01:02:02.014: INFO: The status of Pod netserver-0 is Running (Ready = false) May 5 01:02:04.014: INFO: The status of Pod netserver-0 is Running (Ready = false) May 5 01:02:06.013: INFO: The status of Pod netserver-0 is Running (Ready = false) May 5 01:02:08.014: INFO: The status of Pod netserver-0 is Running (Ready = false) May 5 01:02:10.016: INFO: The status of Pod netserver-0 is Running (Ready = false) May 5 01:02:12.014: 
INFO: The status of Pod netserver-0 is Running (Ready = false) May 5 01:02:14.014: INFO: The status of Pod netserver-0 is Running (Ready = true) May 5 01:02:14.020: INFO: The status of Pod netserver-1 is Running (Ready = false) May 5 01:02:16.024: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 5 01:02:22.092: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.193 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4996 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 01:02:22.092: INFO: >>> kubeConfig: /root/.kube/config I0505 01:02:22.124072 7 log.go:172] (0xc002e16370) (0xc000c1aa00) Create stream I0505 01:02:22.124114 7 log.go:172] (0xc002e16370) (0xc000c1aa00) Stream added, broadcasting: 1 I0505 01:02:22.131272 7 log.go:172] (0xc002e16370) Reply frame received for 1 I0505 01:02:22.131331 7 log.go:172] (0xc002e16370) (0xc000f16be0) Create stream I0505 01:02:22.131347 7 log.go:172] (0xc002e16370) (0xc000f16be0) Stream added, broadcasting: 3 I0505 01:02:22.132196 7 log.go:172] (0xc002e16370) Reply frame received for 3 I0505 01:02:22.132242 7 log.go:172] (0xc002e16370) (0xc000c1ab40) Create stream I0505 01:02:22.132258 7 log.go:172] (0xc002e16370) (0xc000c1ab40) Stream added, broadcasting: 5 I0505 01:02:22.133108 7 log.go:172] (0xc002e16370) Reply frame received for 5 I0505 01:02:23.214552 7 log.go:172] (0xc002e16370) Data frame received for 3 I0505 01:02:23.214589 7 log.go:172] (0xc000f16be0) (3) Data frame handling I0505 01:02:23.214606 7 log.go:172] (0xc000f16be0) (3) Data frame sent I0505 01:02:23.214616 7 log.go:172] (0xc002e16370) Data frame received for 3 I0505 01:02:23.214625 7 log.go:172] (0xc000f16be0) (3) Data frame handling I0505 01:02:23.215338 7 log.go:172] (0xc002e16370) Data frame received for 5 I0505 01:02:23.215364 7 log.go:172] (0xc000c1ab40) (5) Data frame handling I0505 01:02:23.216909 7 
log.go:172] (0xc002e16370) Data frame received for 1 I0505 01:02:23.216927 7 log.go:172] (0xc000c1aa00) (1) Data frame handling I0505 01:02:23.216946 7 log.go:172] (0xc000c1aa00) (1) Data frame sent I0505 01:02:23.216963 7 log.go:172] (0xc002e16370) (0xc000c1aa00) Stream removed, broadcasting: 1 I0505 01:02:23.216998 7 log.go:172] (0xc002e16370) Go away received I0505 01:02:23.217042 7 log.go:172] (0xc002e16370) (0xc000c1aa00) Stream removed, broadcasting: 1 I0505 01:02:23.217060 7 log.go:172] (0xc002e16370) (0xc000f16be0) Stream removed, broadcasting: 3 I0505 01:02:23.217071 7 log.go:172] (0xc002e16370) (0xc000c1ab40) Stream removed, broadcasting: 5 May 5 01:02:23.217: INFO: Found all expected endpoints: [netserver-0] May 5 01:02:23.220: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.101 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4996 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 01:02:23.220: INFO: >>> kubeConfig: /root/.kube/config I0505 01:02:23.253638 7 log.go:172] (0xc002e169a0) (0xc000c1b860) Create stream I0505 01:02:23.253673 7 log.go:172] (0xc002e169a0) (0xc000c1b860) Stream added, broadcasting: 1 I0505 01:02:23.256038 7 log.go:172] (0xc002e169a0) Reply frame received for 1 I0505 01:02:23.256080 7 log.go:172] (0xc002e169a0) (0xc000f16dc0) Create stream I0505 01:02:23.256097 7 log.go:172] (0xc002e169a0) (0xc000f16dc0) Stream added, broadcasting: 3 I0505 01:02:23.257101 7 log.go:172] (0xc002e169a0) Reply frame received for 3 I0505 01:02:23.257342 7 log.go:172] (0xc002e169a0) (0xc000f16e60) Create stream I0505 01:02:23.257363 7 log.go:172] (0xc002e169a0) (0xc000f16e60) Stream added, broadcasting: 5 I0505 01:02:23.258240 7 log.go:172] (0xc002e169a0) Reply frame received for 5 I0505 01:02:24.344703 7 log.go:172] (0xc002e169a0) Data frame received for 5 I0505 01:02:24.344730 7 log.go:172] (0xc000f16e60) (5) Data frame handling I0505 
01:02:24.344757 7 log.go:172] (0xc002e169a0) Data frame received for 3 I0505 01:02:24.344781 7 log.go:172] (0xc000f16dc0) (3) Data frame handling I0505 01:02:24.344801 7 log.go:172] (0xc000f16dc0) (3) Data frame sent I0505 01:02:24.344813 7 log.go:172] (0xc002e169a0) Data frame received for 3 I0505 01:02:24.344823 7 log.go:172] (0xc000f16dc0) (3) Data frame handling I0505 01:02:24.345582 7 log.go:172] (0xc002e169a0) Data frame received for 1 I0505 01:02:24.345592 7 log.go:172] (0xc000c1b860) (1) Data frame handling I0505 01:02:24.345598 7 log.go:172] (0xc000c1b860) (1) Data frame sent I0505 01:02:24.345605 7 log.go:172] (0xc002e169a0) (0xc000c1b860) Stream removed, broadcasting: 1 I0505 01:02:24.345681 7 log.go:172] (0xc002e169a0) (0xc000c1b860) Stream removed, broadcasting: 1 I0505 01:02:24.345698 7 log.go:172] (0xc002e169a0) (0xc000f16dc0) Stream removed, broadcasting: 3 I0505 01:02:24.345707 7 log.go:172] (0xc002e169a0) (0xc000f16e60) Stream removed, broadcasting: 5 May 5 01:02:24.345: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:02:24.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0505 01:02:24.345791 7 log.go:172] (0xc002e169a0) Go away received STEP: Destroying namespace "pod-network-test-4996" for this suite. 
• [SLOW TEST:28.789 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":265,"skipped":4477,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:02:24.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:02:24.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-9702" for this suite. 
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":288,"completed":266,"skipped":4495,"failed":0} SS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:02:24.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 5 01:02:36.292: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2390 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 01:02:36.292: INFO: >>> kubeConfig: /root/.kube/config I0505 01:02:36.323864 7 log.go:172] (0xc002b79290) (0xc0010b2000) Create stream I0505 01:02:36.323886 7 log.go:172] (0xc002b79290) (0xc0010b2000) Stream added, broadcasting: 1 I0505 01:02:36.325790 7 log.go:172] (0xc002b79290) Reply frame received for 1 I0505 01:02:36.325816 7 log.go:172] (0xc002b79290) (0xc001f27cc0) Create stream I0505 01:02:36.325825 7 log.go:172] (0xc002b79290) (0xc001f27cc0) Stream added, broadcasting: 3 I0505 01:02:36.326633 7 log.go:172] (0xc002b79290) Reply frame received for 3 I0505 01:02:36.326667 7 log.go:172] 
(0xc002b79290) (0xc0011eb220) Create stream I0505 01:02:36.326679 7 log.go:172] (0xc002b79290) (0xc0011eb220) Stream added, broadcasting: 5 I0505 01:02:36.327530 7 log.go:172] (0xc002b79290) Reply frame received for 5 I0505 01:02:36.413942 7 log.go:172] (0xc002b79290) Data frame received for 3 I0505 01:02:36.413983 7 log.go:172] (0xc001f27cc0) (3) Data frame handling I0505 01:02:36.414004 7 log.go:172] (0xc001f27cc0) (3) Data frame sent I0505 01:02:36.414033 7 log.go:172] (0xc002b79290) Data frame received for 3 I0505 01:02:36.414107 7 log.go:172] (0xc001f27cc0) (3) Data frame handling I0505 01:02:36.414171 7 log.go:172] (0xc002b79290) Data frame received for 5 I0505 01:02:36.414241 7 log.go:172] (0xc0011eb220) (5) Data frame handling I0505 01:02:36.415179 7 log.go:172] (0xc002b79290) Data frame received for 1 I0505 01:02:36.415203 7 log.go:172] (0xc0010b2000) (1) Data frame handling I0505 01:02:36.415215 7 log.go:172] (0xc0010b2000) (1) Data frame sent I0505 01:02:36.415238 7 log.go:172] (0xc002b79290) (0xc0010b2000) Stream removed, broadcasting: 1 I0505 01:02:36.415360 7 log.go:172] (0xc002b79290) (0xc0010b2000) Stream removed, broadcasting: 1 I0505 01:02:36.415412 7 log.go:172] (0xc002b79290) (0xc001f27cc0) Stream removed, broadcasting: 3 I0505 01:02:36.415454 7 log.go:172] (0xc002b79290) (0xc0011eb220) Stream removed, broadcasting: 5 May 5 01:02:36.415: INFO: Exec stderr: "" I0505 01:02:36.415506 7 log.go:172] (0xc002b79290) Go away received May 5 01:02:36.415: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2390 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 01:02:36.415: INFO: >>> kubeConfig: /root/.kube/config I0505 01:02:36.450441 7 log.go:172] (0xc002e17340) (0xc001f27ea0) Create stream I0505 01:02:36.450464 7 log.go:172] (0xc002e17340) (0xc001f27ea0) Stream added, broadcasting: 1 I0505 01:02:36.452278 7 log.go:172] (0xc002e17340) Reply frame 
received for 1 I0505 01:02:36.452321 7 log.go:172] (0xc002e17340) (0xc0012b8000) Create stream I0505 01:02:36.452329 7 log.go:172] (0xc002e17340) (0xc0012b8000) Stream added, broadcasting: 3 I0505 01:02:36.453613 7 log.go:172] (0xc002e17340) Reply frame received for 3 I0505 01:02:36.453683 7 log.go:172] (0xc002e17340) (0xc0005be500) Create stream I0505 01:02:36.453714 7 log.go:172] (0xc002e17340) (0xc0005be500) Stream added, broadcasting: 5 I0505 01:02:36.454926 7 log.go:172] (0xc002e17340) Reply frame received for 5 I0505 01:02:36.528212 7 log.go:172] (0xc002e17340) Data frame received for 5 I0505 01:02:36.528260 7 log.go:172] (0xc0005be500) (5) Data frame handling I0505 01:02:36.528287 7 log.go:172] (0xc002e17340) Data frame received for 3 I0505 01:02:36.528301 7 log.go:172] (0xc0012b8000) (3) Data frame handling I0505 01:02:36.528324 7 log.go:172] (0xc0012b8000) (3) Data frame sent I0505 01:02:36.528345 7 log.go:172] (0xc002e17340) Data frame received for 3 I0505 01:02:36.528363 7 log.go:172] (0xc0012b8000) (3) Data frame handling I0505 01:02:36.530383 7 log.go:172] (0xc002e17340) Data frame received for 1 I0505 01:02:36.530424 7 log.go:172] (0xc001f27ea0) (1) Data frame handling I0505 01:02:36.530466 7 log.go:172] (0xc001f27ea0) (1) Data frame sent I0505 01:02:36.530491 7 log.go:172] (0xc002e17340) (0xc001f27ea0) Stream removed, broadcasting: 1 I0505 01:02:36.530516 7 log.go:172] (0xc002e17340) Go away received I0505 01:02:36.530610 7 log.go:172] (0xc002e17340) (0xc001f27ea0) Stream removed, broadcasting: 1 I0505 01:02:36.530645 7 log.go:172] (0xc002e17340) (0xc0012b8000) Stream removed, broadcasting: 3 I0505 01:02:36.530658 7 log.go:172] (0xc002e17340) (0xc0005be500) Stream removed, broadcasting: 5 May 5 01:02:36.530: INFO: Exec stderr: "" May 5 01:02:36.530: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2390 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 
01:02:36.530: INFO: >>> kubeConfig: /root/.kube/config I0505 01:02:36.569014 7 log.go:172] (0xc002a9c790) (0xc002ad1860) Create stream I0505 01:02:36.569044 7 log.go:172] (0xc002a9c790) (0xc002ad1860) Stream added, broadcasting: 1 I0505 01:02:36.571069 7 log.go:172] (0xc002a9c790) Reply frame received for 1 I0505 01:02:36.571110 7 log.go:172] (0xc002a9c790) (0xc0011eb2c0) Create stream I0505 01:02:36.571125 7 log.go:172] (0xc002a9c790) (0xc0011eb2c0) Stream added, broadcasting: 3 I0505 01:02:36.572161 7 log.go:172] (0xc002a9c790) Reply frame received for 3 I0505 01:02:36.572204 7 log.go:172] (0xc002a9c790) (0xc0005bf680) Create stream I0505 01:02:36.572222 7 log.go:172] (0xc002a9c790) (0xc0005bf680) Stream added, broadcasting: 5 I0505 01:02:36.573452 7 log.go:172] (0xc002a9c790) Reply frame received for 5 I0505 01:02:36.638024 7 log.go:172] (0xc002a9c790) Data frame received for 5 I0505 01:02:36.638062 7 log.go:172] (0xc0005bf680) (5) Data frame handling I0505 01:02:36.638087 7 log.go:172] (0xc002a9c790) Data frame received for 3 I0505 01:02:36.638107 7 log.go:172] (0xc0011eb2c0) (3) Data frame handling I0505 01:02:36.638161 7 log.go:172] (0xc0011eb2c0) (3) Data frame sent I0505 01:02:36.638179 7 log.go:172] (0xc002a9c790) Data frame received for 3 I0505 01:02:36.638191 7 log.go:172] (0xc0011eb2c0) (3) Data frame handling I0505 01:02:36.639223 7 log.go:172] (0xc002a9c790) Data frame received for 1 I0505 01:02:36.639245 7 log.go:172] (0xc002ad1860) (1) Data frame handling I0505 01:02:36.639258 7 log.go:172] (0xc002ad1860) (1) Data frame sent I0505 01:02:36.639273 7 log.go:172] (0xc002a9c790) (0xc002ad1860) Stream removed, broadcasting: 1 I0505 01:02:36.639369 7 log.go:172] (0xc002a9c790) (0xc002ad1860) Stream removed, broadcasting: 1 I0505 01:02:36.639391 7 log.go:172] (0xc002a9c790) (0xc0011eb2c0) Stream removed, broadcasting: 3 I0505 01:02:36.639406 7 log.go:172] (0xc002a9c790) (0xc0005bf680) Stream removed, broadcasting: 5 I0505 01:02:36.639419 7 log.go:172] 
(0xc002a9c790) Go away received May 5 01:02:36.639: INFO: Exec stderr: "" May 5 01:02:36.639: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2390 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 01:02:36.639: INFO: >>> kubeConfig: /root/.kube/config I0505 01:02:36.693991 7 log.go:172] (0xc0028b29a0) (0xc0011ebea0) Create stream I0505 01:02:36.694026 7 log.go:172] (0xc0028b29a0) (0xc0011ebea0) Stream added, broadcasting: 1 I0505 01:02:36.695965 7 log.go:172] (0xc0028b29a0) Reply frame received for 1 I0505 01:02:36.696008 7 log.go:172] (0xc0028b29a0) (0xc00066dea0) Create stream I0505 01:02:36.696027 7 log.go:172] (0xc0028b29a0) (0xc00066dea0) Stream added, broadcasting: 3 I0505 01:02:36.696957 7 log.go:172] (0xc0028b29a0) Reply frame received for 3 I0505 01:02:36.696996 7 log.go:172] (0xc0028b29a0) (0xc0012b81e0) Create stream I0505 01:02:36.697012 7 log.go:172] (0xc0028b29a0) (0xc0012b81e0) Stream added, broadcasting: 5 I0505 01:02:36.698218 7 log.go:172] (0xc0028b29a0) Reply frame received for 5 I0505 01:02:36.749423 7 log.go:172] (0xc0028b29a0) Data frame received for 5 I0505 01:02:36.749446 7 log.go:172] (0xc0012b81e0) (5) Data frame handling I0505 01:02:36.749461 7 log.go:172] (0xc0028b29a0) Data frame received for 3 I0505 01:02:36.749466 7 log.go:172] (0xc00066dea0) (3) Data frame handling I0505 01:02:36.749475 7 log.go:172] (0xc00066dea0) (3) Data frame sent I0505 01:02:36.749481 7 log.go:172] (0xc0028b29a0) Data frame received for 3 I0505 01:02:36.749486 7 log.go:172] (0xc00066dea0) (3) Data frame handling I0505 01:02:36.750322 7 log.go:172] (0xc0028b29a0) Data frame received for 1 I0505 01:02:36.750337 7 log.go:172] (0xc0011ebea0) (1) Data frame handling I0505 01:02:36.750359 7 log.go:172] (0xc0011ebea0) (1) Data frame sent I0505 01:02:36.750373 7 log.go:172] (0xc0028b29a0) (0xc0011ebea0) Stream removed, broadcasting: 1 I0505 01:02:36.750456 7 
log.go:172] (0xc0028b29a0) (0xc0011ebea0) Stream removed, broadcasting: 1 I0505 01:02:36.750592 7 log.go:172] (0xc0028b29a0) (0xc00066dea0) Stream removed, broadcasting: 3 I0505 01:02:36.750693 7 log.go:172] (0xc0028b29a0) Go away received I0505 01:02:36.750754 7 log.go:172] (0xc0028b29a0) (0xc0012b81e0) Stream removed, broadcasting: 5 May 5 01:02:36.750: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 5 01:02:36.750: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2390 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 01:02:36.750: INFO: >>> kubeConfig: /root/.kube/config I0505 01:02:36.778158 7 log.go:172] (0xc002b79970) (0xc0010b2fa0) Create stream I0505 01:02:36.778188 7 log.go:172] (0xc002b79970) (0xc0010b2fa0) Stream added, broadcasting: 1 I0505 01:02:36.780038 7 log.go:172] (0xc002b79970) Reply frame received for 1 I0505 01:02:36.780089 7 log.go:172] (0xc002b79970) (0xc0010b3040) Create stream I0505 01:02:36.780106 7 log.go:172] (0xc002b79970) (0xc0010b3040) Stream added, broadcasting: 3 I0505 01:02:36.781348 7 log.go:172] (0xc002b79970) Reply frame received for 3 I0505 01:02:36.781387 7 log.go:172] (0xc002b79970) (0xc0012b8640) Create stream I0505 01:02:36.781401 7 log.go:172] (0xc002b79970) (0xc0012b8640) Stream added, broadcasting: 5 I0505 01:02:36.782331 7 log.go:172] (0xc002b79970) Reply frame received for 5 I0505 01:02:36.842402 7 log.go:172] (0xc002b79970) Data frame received for 5 I0505 01:02:36.842437 7 log.go:172] (0xc0012b8640) (5) Data frame handling I0505 01:02:36.842462 7 log.go:172] (0xc002b79970) Data frame received for 3 I0505 01:02:36.842477 7 log.go:172] (0xc0010b3040) (3) Data frame handling I0505 01:02:36.842492 7 log.go:172] (0xc0010b3040) (3) Data frame sent I0505 01:02:36.842506 7 log.go:172] (0xc002b79970) Data frame received for 3 I0505 01:02:36.842518 
7 log.go:172] (0xc0010b3040) (3) Data frame handling I0505 01:02:36.848602 7 log.go:172] (0xc002b79970) Data frame received for 1 I0505 01:02:36.848643 7 log.go:172] (0xc0010b2fa0) (1) Data frame handling I0505 01:02:36.848681 7 log.go:172] (0xc0010b2fa0) (1) Data frame sent I0505 01:02:36.848708 7 log.go:172] (0xc002b79970) (0xc0010b2fa0) Stream removed, broadcasting: 1 I0505 01:02:36.848737 7 log.go:172] (0xc002b79970) Go away received I0505 01:02:36.848877 7 log.go:172] (0xc002b79970) (0xc0010b2fa0) Stream removed, broadcasting: 1 I0505 01:02:36.848905 7 log.go:172] (0xc002b79970) (0xc0010b3040) Stream removed, broadcasting: 3 I0505 01:02:36.848916 7 log.go:172] (0xc002b79970) (0xc0012b8640) Stream removed, broadcasting: 5 May 5 01:02:36.848: INFO: Exec stderr: "" May 5 01:02:36.848: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2390 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 01:02:36.848: INFO: >>> kubeConfig: /root/.kube/config I0505 01:02:36.871989 7 log.go:172] (0xc002a60000) (0xc0010b3c20) Create stream I0505 01:02:36.872014 7 log.go:172] (0xc002a60000) (0xc0010b3c20) Stream added, broadcasting: 1 I0505 01:02:36.873969 7 log.go:172] (0xc002a60000) Reply frame received for 1 I0505 01:02:36.874022 7 log.go:172] (0xc002a60000) (0xc00037c8c0) Create stream I0505 01:02:36.874036 7 log.go:172] (0xc002a60000) (0xc00037c8c0) Stream added, broadcasting: 3 I0505 01:02:36.874998 7 log.go:172] (0xc002a60000) Reply frame received for 3 I0505 01:02:36.875019 7 log.go:172] (0xc002a60000) (0xc0012b8780) Create stream I0505 01:02:36.875035 7 log.go:172] (0xc002a60000) (0xc0012b8780) Stream added, broadcasting: 5 I0505 01:02:36.875912 7 log.go:172] (0xc002a60000) Reply frame received for 5 I0505 01:02:36.934437 7 log.go:172] (0xc002a60000) Data frame received for 3 I0505 01:02:36.934466 7 log.go:172] (0xc00037c8c0) (3) Data frame handling I0505 
01:02:36.934488 7 log.go:172] (0xc00037c8c0) (3) Data frame sent I0505 01:02:36.934500 7 log.go:172] (0xc002a60000) Data frame received for 3 I0505 01:02:36.934512 7 log.go:172] (0xc00037c8c0) (3) Data frame handling I0505 01:02:36.934671 7 log.go:172] (0xc002a60000) Data frame received for 5 I0505 01:02:36.934747 7 log.go:172] (0xc0012b8780) (5) Data frame handling I0505 01:02:36.936065 7 log.go:172] (0xc002a60000) Data frame received for 1 I0505 01:02:36.936099 7 log.go:172] (0xc0010b3c20) (1) Data frame handling I0505 01:02:36.936127 7 log.go:172] (0xc0010b3c20) (1) Data frame sent I0505 01:02:36.936152 7 log.go:172] (0xc002a60000) (0xc0010b3c20) Stream removed, broadcasting: 1 I0505 01:02:36.936178 7 log.go:172] (0xc002a60000) Go away received I0505 01:02:36.936373 7 log.go:172] (0xc002a60000) (0xc0010b3c20) Stream removed, broadcasting: 1 I0505 01:02:36.936415 7 log.go:172] (0xc002a60000) (0xc00037c8c0) Stream removed, broadcasting: 3 I0505 01:02:36.936440 7 log.go:172] (0xc002a60000) (0xc0012b8780) Stream removed, broadcasting: 5 May 5 01:02:36.936: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 5 01:02:36.936: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2390 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 01:02:36.936: INFO: >>> kubeConfig: /root/.kube/config I0505 01:02:36.972681 7 log.go:172] (0xc001a8e370) (0xc000ac2640) Create stream I0505 01:02:36.972714 7 log.go:172] (0xc001a8e370) (0xc000ac2640) Stream added, broadcasting: 1 I0505 01:02:36.974778 7 log.go:172] (0xc001a8e370) Reply frame received for 1 I0505 01:02:36.974811 7 log.go:172] (0xc001a8e370) (0xc000ac2c80) Create stream I0505 01:02:36.974824 7 log.go:172] (0xc001a8e370) (0xc000ac2c80) Stream added, broadcasting: 3 I0505 01:02:36.975809 7 log.go:172] (0xc001a8e370) Reply frame received for 3 I0505 
01:02:36.975850 7 log.go:172] (0xc001a8e370) (0xc002ad1900) Create stream I0505 01:02:36.975864 7 log.go:172] (0xc001a8e370) (0xc002ad1900) Stream added, broadcasting: 5 I0505 01:02:36.976746 7 log.go:172] (0xc001a8e370) Reply frame received for 5 I0505 01:02:37.019567 7 log.go:172] (0xc001a8e370) Data frame received for 3 I0505 01:02:37.019596 7 log.go:172] (0xc000ac2c80) (3) Data frame handling I0505 01:02:37.019615 7 log.go:172] (0xc000ac2c80) (3) Data frame sent I0505 01:02:37.019643 7 log.go:172] (0xc001a8e370) Data frame received for 3 I0505 01:02:37.019662 7 log.go:172] (0xc000ac2c80) (3) Data frame handling I0505 01:02:37.019726 7 log.go:172] (0xc001a8e370) Data frame received for 5 I0505 01:02:37.019766 7 log.go:172] (0xc002ad1900) (5) Data frame handling I0505 01:02:37.021877 7 log.go:172] (0xc001a8e370) Data frame received for 1 I0505 01:02:37.021910 7 log.go:172] (0xc000ac2640) (1) Data frame handling I0505 01:02:37.021947 7 log.go:172] (0xc000ac2640) (1) Data frame sent I0505 01:02:37.021983 7 log.go:172] (0xc001a8e370) (0xc000ac2640) Stream removed, broadcasting: 1 I0505 01:02:37.022011 7 log.go:172] (0xc001a8e370) Go away received I0505 01:02:37.022110 7 log.go:172] (0xc001a8e370) (0xc000ac2640) Stream removed, broadcasting: 1 I0505 01:02:37.022145 7 log.go:172] (0xc001a8e370) (0xc000ac2c80) Stream removed, broadcasting: 3 I0505 01:02:37.022170 7 log.go:172] (0xc001a8e370) (0xc002ad1900) Stream removed, broadcasting: 5 May 5 01:02:37.022: INFO: Exec stderr: "" May 5 01:02:37.022: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2390 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 01:02:37.022: INFO: >>> kubeConfig: /root/.kube/config I0505 01:02:37.054637 7 log.go:172] (0xc002a9cf20) (0xc002ad1c20) Create stream I0505 01:02:37.054671 7 log.go:172] (0xc002a9cf20) (0xc002ad1c20) Stream added, broadcasting: 1 I0505 01:02:37.056309 7 
log.go:172] (0xc002a9cf20) Reply frame received for 1 I0505 01:02:37.056335 7 log.go:172] (0xc002a9cf20) (0xc000c9e140) Create stream I0505 01:02:37.056343 7 log.go:172] (0xc002a9cf20) (0xc000c9e140) Stream added, broadcasting: 3 I0505 01:02:37.057741 7 log.go:172] (0xc002a9cf20) Reply frame received for 3 I0505 01:02:37.057813 7 log.go:172] (0xc002a9cf20) (0xc0012b8820) Create stream I0505 01:02:37.057850 7 log.go:172] (0xc002a9cf20) (0xc0012b8820) Stream added, broadcasting: 5 I0505 01:02:37.058943 7 log.go:172] (0xc002a9cf20) Reply frame received for 5 I0505 01:02:37.125871 7 log.go:172] (0xc002a9cf20) Data frame received for 3 I0505 01:02:37.125926 7 log.go:172] (0xc000c9e140) (3) Data frame handling I0505 01:02:37.125963 7 log.go:172] (0xc000c9e140) (3) Data frame sent I0505 01:02:37.126014 7 log.go:172] (0xc002a9cf20) Data frame received for 3 I0505 01:02:37.126027 7 log.go:172] (0xc000c9e140) (3) Data frame handling I0505 01:02:37.126084 7 log.go:172] (0xc002a9cf20) Data frame received for 5 I0505 01:02:37.126114 7 log.go:172] (0xc0012b8820) (5) Data frame handling I0505 01:02:37.128434 7 log.go:172] (0xc002a9cf20) Data frame received for 1 I0505 01:02:37.128457 7 log.go:172] (0xc002ad1c20) (1) Data frame handling I0505 01:02:37.128473 7 log.go:172] (0xc002ad1c20) (1) Data frame sent I0505 01:02:37.128486 7 log.go:172] (0xc002a9cf20) (0xc002ad1c20) Stream removed, broadcasting: 1 I0505 01:02:37.128515 7 log.go:172] (0xc002a9cf20) Go away received I0505 01:02:37.128686 7 log.go:172] (0xc002a9cf20) (0xc002ad1c20) Stream removed, broadcasting: 1 I0505 01:02:37.128718 7 log.go:172] (0xc002a9cf20) (0xc000c9e140) Stream removed, broadcasting: 3 I0505 01:02:37.128748 7 log.go:172] (0xc002a9cf20) (0xc0012b8820) Stream removed, broadcasting: 5 May 5 01:02:37.128: INFO: Exec stderr: "" May 5 01:02:37.128: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2390 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} May 5 01:02:37.128: INFO: >>> kubeConfig: /root/.kube/config I0505 01:02:37.165272 7 log.go:172] (0xc001a8e9a0) (0xc000ac3cc0) Create stream I0505 01:02:37.165312 7 log.go:172] (0xc001a8e9a0) (0xc000ac3cc0) Stream added, broadcasting: 1 I0505 01:02:37.168073 7 log.go:172] (0xc001a8e9a0) Reply frame received for 1 I0505 01:02:37.168120 7 log.go:172] (0xc001a8e9a0) (0xc0012b88c0) Create stream I0505 01:02:37.168141 7 log.go:172] (0xc001a8e9a0) (0xc0012b88c0) Stream added, broadcasting: 3 I0505 01:02:37.169004 7 log.go:172] (0xc001a8e9a0) Reply frame received for 3 I0505 01:02:37.169032 7 log.go:172] (0xc001a8e9a0) (0xc000d8c0a0) Create stream I0505 01:02:37.169045 7 log.go:172] (0xc001a8e9a0) (0xc000d8c0a0) Stream added, broadcasting: 5 I0505 01:02:37.170119 7 log.go:172] (0xc001a8e9a0) Reply frame received for 5 I0505 01:02:37.238763 7 log.go:172] (0xc001a8e9a0) Data frame received for 5 I0505 01:02:37.238801 7 log.go:172] (0xc000d8c0a0) (5) Data frame handling I0505 01:02:37.238840 7 log.go:172] (0xc001a8e9a0) Data frame received for 3 I0505 01:02:37.238855 7 log.go:172] (0xc0012b88c0) (3) Data frame handling I0505 01:02:37.238874 7 log.go:172] (0xc0012b88c0) (3) Data frame sent I0505 01:02:37.238887 7 log.go:172] (0xc001a8e9a0) Data frame received for 3 I0505 01:02:37.238897 7 log.go:172] (0xc0012b88c0) (3) Data frame handling I0505 01:02:37.240332 7 log.go:172] (0xc001a8e9a0) Data frame received for 1 I0505 01:02:37.240368 7 log.go:172] (0xc000ac3cc0) (1) Data frame handling I0505 01:02:37.240383 7 log.go:172] (0xc000ac3cc0) (1) Data frame sent I0505 01:02:37.240400 7 log.go:172] (0xc001a8e9a0) (0xc000ac3cc0) Stream removed, broadcasting: 1 I0505 01:02:37.240418 7 log.go:172] (0xc001a8e9a0) Go away received I0505 01:02:37.240530 7 log.go:172] (0xc001a8e9a0) (0xc000ac3cc0) Stream removed, broadcasting: 1 I0505 01:02:37.240547 7 log.go:172] (0xc001a8e9a0) (0xc0012b88c0) Stream removed, broadcasting: 3 I0505 
01:02:37.240555 7 log.go:172] (0xc001a8e9a0) (0xc000d8c0a0) Stream removed, broadcasting: 5 May 5 01:02:37.240: INFO: Exec stderr: "" May 5 01:02:37.240: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2390 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 01:02:37.240: INFO: >>> kubeConfig: /root/.kube/config I0505 01:02:37.270265 7 log.go:172] (0xc0028b3080) (0xc000c9eb40) Create stream I0505 01:02:37.270292 7 log.go:172] (0xc0028b3080) (0xc000c9eb40) Stream added, broadcasting: 1 I0505 01:02:37.272051 7 log.go:172] (0xc0028b3080) Reply frame received for 1 I0505 01:02:37.272088 7 log.go:172] (0xc0028b3080) (0xc000ac3ea0) Create stream I0505 01:02:37.272097 7 log.go:172] (0xc0028b3080) (0xc000ac3ea0) Stream added, broadcasting: 3 I0505 01:02:37.273069 7 log.go:172] (0xc0028b3080) Reply frame received for 3 I0505 01:02:37.273346 7 log.go:172] (0xc0028b3080) (0xc000d8c140) Create stream I0505 01:02:37.273368 7 log.go:172] (0xc0028b3080) (0xc000d8c140) Stream added, broadcasting: 5 I0505 01:02:37.274451 7 log.go:172] (0xc0028b3080) Reply frame received for 5 I0505 01:02:37.335902 7 log.go:172] (0xc0028b3080) Data frame received for 5 I0505 01:02:37.335929 7 log.go:172] (0xc000d8c140) (5) Data frame handling I0505 01:02:37.336007 7 log.go:172] (0xc0028b3080) Data frame received for 3 I0505 01:02:37.336045 7 log.go:172] (0xc000ac3ea0) (3) Data frame handling I0505 01:02:37.336079 7 log.go:172] (0xc000ac3ea0) (3) Data frame sent I0505 01:02:37.336104 7 log.go:172] (0xc0028b3080) Data frame received for 3 I0505 01:02:37.336124 7 log.go:172] (0xc000ac3ea0) (3) Data frame handling I0505 01:02:37.337503 7 log.go:172] (0xc0028b3080) Data frame received for 1 I0505 01:02:37.337520 7 log.go:172] (0xc000c9eb40) (1) Data frame handling I0505 01:02:37.337531 7 log.go:172] (0xc000c9eb40) (1) Data frame sent I0505 01:02:37.337550 7 log.go:172] (0xc0028b3080) 
(0xc000c9eb40) Stream removed, broadcasting: 1 I0505 01:02:37.337624 7 log.go:172] (0xc0028b3080) (0xc000c9eb40) Stream removed, broadcasting: 1 I0505 01:02:37.337638 7 log.go:172] (0xc0028b3080) (0xc000ac3ea0) Stream removed, broadcasting: 3 I0505 01:02:37.337652 7 log.go:172] (0xc0028b3080) Go away received I0505 01:02:37.337761 7 log.go:172] (0xc0028b3080) (0xc000d8c140) Stream removed, broadcasting: 5 May 5 01:02:37.337: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:02:37.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-2390" for this suite. • [SLOW TEST:12.841 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":267,"skipped":4497,"failed":0} S ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:02:37.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 5 01:02:37.499: INFO: Waiting up to 5m0s for pod "downward-api-97cabebb-3ba3-48ab-aac3-f6af9ebb9852" in namespace "downward-api-7172" to be "Succeeded or Failed" May 5 01:02:37.555: INFO: Pod "downward-api-97cabebb-3ba3-48ab-aac3-f6af9ebb9852": Phase="Pending", Reason="", readiness=false. Elapsed: 55.638675ms May 5 01:02:39.608: INFO: Pod "downward-api-97cabebb-3ba3-48ab-aac3-f6af9ebb9852": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108245686s May 5 01:02:41.793: INFO: Pod "downward-api-97cabebb-3ba3-48ab-aac3-f6af9ebb9852": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.293631408s STEP: Saw pod success May 5 01:02:41.793: INFO: Pod "downward-api-97cabebb-3ba3-48ab-aac3-f6af9ebb9852" satisfied condition "Succeeded or Failed" May 5 01:02:41.796: INFO: Trying to get logs from node latest-worker pod downward-api-97cabebb-3ba3-48ab-aac3-f6af9ebb9852 container dapi-container: STEP: delete the pod May 5 01:02:42.001: INFO: Waiting for pod downward-api-97cabebb-3ba3-48ab-aac3-f6af9ebb9852 to disappear May 5 01:02:42.056: INFO: Pod downward-api-97cabebb-3ba3-48ab-aac3-f6af9ebb9852 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:02:42.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7172" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":288,"completed":268,"skipped":4498,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:02:42.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-277d0ead-d0e7-4321-ab79-79f5440877a1 in namespace container-probe-7821 May 5 01:02:46.896: INFO: Started pod busybox-277d0ead-d0e7-4321-ab79-79f5440877a1 in namespace container-probe-7821 STEP: checking the pod's current state and verifying that restartCount is present May 5 01:02:46.900: INFO: Initial restart count of pod busybox-277d0ead-d0e7-4321-ab79-79f5440877a1 is 0 May 5 01:03:36.092: INFO: Restart count of pod container-probe-7821/busybox-277d0ead-d0e7-4321-ab79-79f5440877a1 is now 1 (49.192105818s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:03:36.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7821" for this 
suite. • [SLOW TEST:54.072 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":269,"skipped":4517,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:03:36.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 5 01:03:36.288: INFO: >>> kubeConfig: /root/.kube/config May 5 01:03:39.200: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:03:49.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9868" for this suite. 
• [SLOW TEST:13.747 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":288,"completed":270,"skipped":4517,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:03:49.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-1765 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-1765 I0505 01:03:50.130942 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-1765, replica count: 2 I0505 01:03:53.181459 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 
pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0505 01:03:56.181681 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 5 01:03:56.181: INFO: Creating new exec pod May 5 01:04:01.222: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1765 execpodc75vk -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 5 01:04:03.918: INFO: stderr: "I0505 01:04:03.817299 3754 log.go:172] (0xc00082ac60) (0xc000239720) Create stream\nI0505 01:04:03.817345 3754 log.go:172] (0xc00082ac60) (0xc000239720) Stream added, broadcasting: 1\nI0505 01:04:03.819980 3754 log.go:172] (0xc00082ac60) Reply frame received for 1\nI0505 01:04:03.820030 3754 log.go:172] (0xc00082ac60) (0xc0001377c0) Create stream\nI0505 01:04:03.820053 3754 log.go:172] (0xc00082ac60) (0xc0001377c0) Stream added, broadcasting: 3\nI0505 01:04:03.821011 3754 log.go:172] (0xc00082ac60) Reply frame received for 3\nI0505 01:04:03.821054 3754 log.go:172] (0xc00082ac60) (0xc000518500) Create stream\nI0505 01:04:03.821079 3754 log.go:172] (0xc00082ac60) (0xc000518500) Stream added, broadcasting: 5\nI0505 01:04:03.822303 3754 log.go:172] (0xc00082ac60) Reply frame received for 5\nI0505 01:04:03.910594 3754 log.go:172] (0xc00082ac60) Data frame received for 5\nI0505 01:04:03.910628 3754 log.go:172] (0xc000518500) (5) Data frame handling\nI0505 01:04:03.910647 3754 log.go:172] (0xc000518500) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0505 01:04:03.910844 3754 log.go:172] (0xc00082ac60) Data frame received for 3\nI0505 01:04:03.910871 3754 log.go:172] (0xc0001377c0) (3) Data frame handling\nI0505 01:04:03.911124 3754 log.go:172] (0xc00082ac60) Data frame received for 5\nI0505 01:04:03.911157 3754 log.go:172] 
(0xc000518500) (5) Data frame handling\nI0505 01:04:03.912606 3754 log.go:172] (0xc00082ac60) Data frame received for 1\nI0505 01:04:03.912623 3754 log.go:172] (0xc000239720) (1) Data frame handling\nI0505 01:04:03.912635 3754 log.go:172] (0xc000239720) (1) Data frame sent\nI0505 01:04:03.912646 3754 log.go:172] (0xc00082ac60) (0xc000239720) Stream removed, broadcasting: 1\nI0505 01:04:03.912840 3754 log.go:172] (0xc00082ac60) Go away received\nI0505 01:04:03.912914 3754 log.go:172] (0xc00082ac60) (0xc000239720) Stream removed, broadcasting: 1\nI0505 01:04:03.912927 3754 log.go:172] (0xc00082ac60) (0xc0001377c0) Stream removed, broadcasting: 3\nI0505 01:04:03.912934 3754 log.go:172] (0xc00082ac60) (0xc000518500) Stream removed, broadcasting: 5\n" May 5 01:04:03.918: INFO: stdout: "" May 5 01:04:03.919: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1765 execpodc75vk -- /bin/sh -x -c nc -zv -t -w 2 10.98.245.156 80' May 5 01:04:04.141: INFO: stderr: "I0505 01:04:04.060237 3788 log.go:172] (0xc000a85ce0) (0xc000864000) Create stream\nI0505 01:04:04.060300 3788 log.go:172] (0xc000a85ce0) (0xc000864000) Stream added, broadcasting: 1\nI0505 01:04:04.064664 3788 log.go:172] (0xc000a85ce0) Reply frame received for 1\nI0505 01:04:04.064709 3788 log.go:172] (0xc000a85ce0) (0xc00085f040) Create stream\nI0505 01:04:04.064730 3788 log.go:172] (0xc000a85ce0) (0xc00085f040) Stream added, broadcasting: 3\nI0505 01:04:04.065928 3788 log.go:172] (0xc000a85ce0) Reply frame received for 3\nI0505 01:04:04.065983 3788 log.go:172] (0xc000a85ce0) (0xc000760b40) Create stream\nI0505 01:04:04.066005 3788 log.go:172] (0xc000a85ce0) (0xc000760b40) Stream added, broadcasting: 5\nI0505 01:04:04.066871 3788 log.go:172] (0xc000a85ce0) Reply frame received for 5\nI0505 01:04:04.132605 3788 log.go:172] (0xc000a85ce0) Data frame received for 5\nI0505 01:04:04.132728 3788 log.go:172] (0xc000760b40) (5) Data frame 
handling\nI0505 01:04:04.132752 3788 log.go:172] (0xc000760b40) (5) Data frame sent\n+ nc -zv -t -w 2 10.98.245.156 80\nConnection to 10.98.245.156 80 port [tcp/http] succeeded!\nI0505 01:04:04.132773 3788 log.go:172] (0xc000a85ce0) Data frame received for 3\nI0505 01:04:04.132824 3788 log.go:172] (0xc00085f040) (3) Data frame handling\nI0505 01:04:04.132847 3788 log.go:172] (0xc000a85ce0) Data frame received for 5\nI0505 01:04:04.132858 3788 log.go:172] (0xc000760b40) (5) Data frame handling\nI0505 01:04:04.134713 3788 log.go:172] (0xc000a85ce0) Data frame received for 1\nI0505 01:04:04.134756 3788 log.go:172] (0xc000864000) (1) Data frame handling\nI0505 01:04:04.134783 3788 log.go:172] (0xc000864000) (1) Data frame sent\nI0505 01:04:04.134826 3788 log.go:172] (0xc000a85ce0) (0xc000864000) Stream removed, broadcasting: 1\nI0505 01:04:04.134922 3788 log.go:172] (0xc000a85ce0) Go away received\nI0505 01:04:04.135345 3788 log.go:172] (0xc000a85ce0) (0xc000864000) Stream removed, broadcasting: 1\nI0505 01:04:04.135369 3788 log.go:172] (0xc000a85ce0) (0xc00085f040) Stream removed, broadcasting: 3\nI0505 01:04:04.135382 3788 log.go:172] (0xc000a85ce0) (0xc000760b40) Stream removed, broadcasting: 5\n" May 5 01:04:04.141: INFO: stdout: "" May 5 01:04:04.141: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1765 execpodc75vk -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31298' May 5 01:04:04.338: INFO: stderr: "I0505 01:04:04.267008 3808 log.go:172] (0xc000ba98c0) (0xc000bb25a0) Create stream\nI0505 01:04:04.267091 3808 log.go:172] (0xc000ba98c0) (0xc000bb25a0) Stream added, broadcasting: 1\nI0505 01:04:04.271770 3808 log.go:172] (0xc000ba98c0) Reply frame received for 1\nI0505 01:04:04.271812 3808 log.go:172] (0xc000ba98c0) (0xc000564320) Create stream\nI0505 01:04:04.271823 3808 log.go:172] (0xc000ba98c0) (0xc000564320) Stream added, broadcasting: 3\nI0505 01:04:04.272550 3808 log.go:172] 
(0xc000ba98c0) Reply frame received for 3\nI0505 01:04:04.272583 3808 log.go:172] (0xc000ba98c0) (0xc000550f00) Create stream\nI0505 01:04:04.272593 3808 log.go:172] (0xc000ba98c0) (0xc000550f00) Stream added, broadcasting: 5\nI0505 01:04:04.273455 3808 log.go:172] (0xc000ba98c0) Reply frame received for 5\nI0505 01:04:04.329831 3808 log.go:172] (0xc000ba98c0) Data frame received for 5\nI0505 01:04:04.329862 3808 log.go:172] (0xc000550f00) (5) Data frame handling\nI0505 01:04:04.329877 3808 log.go:172] (0xc000550f00) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 31298\nI0505 01:04:04.330263 3808 log.go:172] (0xc000ba98c0) Data frame received for 5\nI0505 01:04:04.330302 3808 log.go:172] (0xc000550f00) (5) Data frame handling\nI0505 01:04:04.330339 3808 log.go:172] (0xc000550f00) (5) Data frame sent\nConnection to 172.17.0.13 31298 port [tcp/31298] succeeded!\nI0505 01:04:04.330615 3808 log.go:172] (0xc000ba98c0) Data frame received for 3\nI0505 01:04:04.330640 3808 log.go:172] (0xc000564320) (3) Data frame handling\nI0505 01:04:04.331409 3808 log.go:172] (0xc000ba98c0) Data frame received for 5\nI0505 01:04:04.331441 3808 log.go:172] (0xc000550f00) (5) Data frame handling\nI0505 01:04:04.332376 3808 log.go:172] (0xc000ba98c0) Data frame received for 1\nI0505 01:04:04.332413 3808 log.go:172] (0xc000bb25a0) (1) Data frame handling\nI0505 01:04:04.332455 3808 log.go:172] (0xc000bb25a0) (1) Data frame sent\nI0505 01:04:04.332495 3808 log.go:172] (0xc000ba98c0) (0xc000bb25a0) Stream removed, broadcasting: 1\nI0505 01:04:04.332806 3808 log.go:172] (0xc000ba98c0) Go away received\nI0505 01:04:04.333007 3808 log.go:172] (0xc000ba98c0) (0xc000bb25a0) Stream removed, broadcasting: 1\nI0505 01:04:04.333041 3808 log.go:172] (0xc000ba98c0) (0xc000564320) Stream removed, broadcasting: 3\nI0505 01:04:04.333062 3808 log.go:172] (0xc000ba98c0) (0xc000550f00) Stream removed, broadcasting: 5\n" May 5 01:04:04.338: INFO: stdout: "" May 5 01:04:04.338: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1765 execpodc75vk -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31298' May 5 01:04:04.549: INFO: stderr: "I0505 01:04:04.471013 3829 log.go:172] (0xc0009cd1e0) (0xc000866500) Create stream\nI0505 01:04:04.471057 3829 log.go:172] (0xc0009cd1e0) (0xc000866500) Stream added, broadcasting: 1\nI0505 01:04:04.478378 3829 log.go:172] (0xc0009cd1e0) Reply frame received for 1\nI0505 01:04:04.478411 3829 log.go:172] (0xc0009cd1e0) (0xc00085b400) Create stream\nI0505 01:04:04.478420 3829 log.go:172] (0xc0009cd1e0) (0xc00085b400) Stream added, broadcasting: 3\nI0505 01:04:04.479383 3829 log.go:172] (0xc0009cd1e0) Reply frame received for 3\nI0505 01:04:04.479415 3829 log.go:172] (0xc0009cd1e0) (0xc00054cc80) Create stream\nI0505 01:04:04.479425 3829 log.go:172] (0xc0009cd1e0) (0xc00054cc80) Stream added, broadcasting: 5\nI0505 01:04:04.480276 3829 log.go:172] (0xc0009cd1e0) Reply frame received for 5\nI0505 01:04:04.541692 3829 log.go:172] (0xc0009cd1e0) Data frame received for 5\nI0505 01:04:04.541729 3829 log.go:172] (0xc00054cc80) (5) Data frame handling\nI0505 01:04:04.541745 3829 log.go:172] (0xc00054cc80) (5) Data frame sent\nI0505 01:04:04.541757 3829 log.go:172] (0xc0009cd1e0) Data frame received for 5\n+ nc -zv -t -w 2 172.17.0.12 31298\nConnection to 172.17.0.12 31298 port [tcp/31298] succeeded!\nI0505 01:04:04.541772 3829 log.go:172] (0xc00054cc80) (5) Data frame handling\nI0505 01:04:04.541819 3829 log.go:172] (0xc0009cd1e0) Data frame received for 3\nI0505 01:04:04.541845 3829 log.go:172] (0xc00085b400) (3) Data frame handling\nI0505 01:04:04.543098 3829 log.go:172] (0xc0009cd1e0) Data frame received for 1\nI0505 01:04:04.543120 3829 log.go:172] (0xc000866500) (1) Data frame handling\nI0505 01:04:04.543141 3829 log.go:172] (0xc000866500) (1) Data frame sent\nI0505 01:04:04.543226 3829 log.go:172] (0xc0009cd1e0) (0xc000866500) Stream removed, 
broadcasting: 1\nI0505 01:04:04.543468 3829 log.go:172] (0xc0009cd1e0) (0xc000866500) Stream removed, broadcasting: 1\nI0505 01:04:04.543481 3829 log.go:172] (0xc0009cd1e0) (0xc00085b400) Stream removed, broadcasting: 3\nI0505 01:04:04.543489 3829 log.go:172] (0xc0009cd1e0) (0xc00054cc80) Stream removed, broadcasting: 5\n" May 5 01:04:04.549: INFO: stdout: "" May 5 01:04:04.549: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:04:04.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1765" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:14.754 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":288,"completed":271,"skipped":4524,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:04:04.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should 
support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 5 01:04:04.706: INFO: Waiting up to 5m0s for pod "pod-b0383e81-c361-4e56-a0bd-7f2a42bcebe8" in namespace "emptydir-1827" to be "Succeeded or Failed" May 5 01:04:04.777: INFO: Pod "pod-b0383e81-c361-4e56-a0bd-7f2a42bcebe8": Phase="Pending", Reason="", readiness=false. Elapsed: 71.371015ms May 5 01:04:06.820: INFO: Pod "pod-b0383e81-c361-4e56-a0bd-7f2a42bcebe8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113888257s May 5 01:04:08.987: INFO: Pod "pod-b0383e81-c361-4e56-a0bd-7f2a42bcebe8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.281359661s May 5 01:04:10.992: INFO: Pod "pod-b0383e81-c361-4e56-a0bd-7f2a42bcebe8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.285697534s STEP: Saw pod success May 5 01:04:10.992: INFO: Pod "pod-b0383e81-c361-4e56-a0bd-7f2a42bcebe8" satisfied condition "Succeeded or Failed" May 5 01:04:10.995: INFO: Trying to get logs from node latest-worker pod pod-b0383e81-c361-4e56-a0bd-7f2a42bcebe8 container test-container: STEP: delete the pod May 5 01:04:11.060: INFO: Waiting for pod pod-b0383e81-c361-4e56-a0bd-7f2a42bcebe8 to disappear May 5 01:04:11.082: INFO: Pod pod-b0383e81-c361-4e56-a0bd-7f2a42bcebe8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:04:11.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1827" for this suite. 
• [SLOW TEST:6.443 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":272,"skipped":4532,"failed":0} [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:04:11.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 5 01:04:17.736: INFO: Successfully updated pod "annotationupdate2cac013d-802a-4c6a-bfdb-87ff669a1f9c" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:04:21.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4561" for this suite. 
• [SLOW TEST:10.766 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":273,"skipped":4532,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:04:21.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0505 01:05:02.885852 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 5 01:05:02.885: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:05:02.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2470" for this suite. 
• [SLOW TEST:41.044 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":288,"completed":274,"skipped":4563,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:05:02.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-62ce9a3d-961b-4162-9cee-fd085dbc6e6c STEP: Creating a pod to test consume configMaps May 5 01:05:03.018: INFO: Waiting up to 5m0s for pod "pod-configmaps-63493aef-3149-4479-9379-aacddb63cba3" in namespace "configmap-9037" to be "Succeeded or Failed" May 5 01:05:03.023: INFO: Pod "pod-configmaps-63493aef-3149-4479-9379-aacddb63cba3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.450614ms May 5 01:05:05.908: INFO: Pod "pod-configmaps-63493aef-3149-4479-9379-aacddb63cba3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.890258741s May 5 01:05:07.912: INFO: Pod "pod-configmaps-63493aef-3149-4479-9379-aacddb63cba3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.89402136s STEP: Saw pod success May 5 01:05:07.912: INFO: Pod "pod-configmaps-63493aef-3149-4479-9379-aacddb63cba3" satisfied condition "Succeeded or Failed" May 5 01:05:07.914: INFO: Trying to get logs from node latest-worker pod pod-configmaps-63493aef-3149-4479-9379-aacddb63cba3 container configmap-volume-test: STEP: delete the pod May 5 01:05:08.428: INFO: Waiting for pod pod-configmaps-63493aef-3149-4479-9379-aacddb63cba3 to disappear May 5 01:05:08.611: INFO: Pod pod-configmaps-63493aef-3149-4479-9379-aacddb63cba3 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:05:08.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9037" for this suite. 
• [SLOW TEST:5.803 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":275,"skipped":4596,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:05:08.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 5 01:05:09.337: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:05:11.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6369" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":288,"completed":276,"skipped":4616,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:05:11.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5805 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-5805 STEP: Creating statefulset with conflicting port in namespace statefulset-5805 STEP: Waiting until pod test-pod will start running in namespace statefulset-5805 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-5805 May 5 01:05:17.883: INFO: Observed stateful pod in namespace: statefulset-5805, name: ss-0, uid: b796f50a-dc2c-4771-9d02-673bda602b53, status phase: Pending. 
Waiting for statefulset controller to delete. May 5 01:05:18.114: INFO: Observed stateful pod in namespace: statefulset-5805, name: ss-0, uid: b796f50a-dc2c-4771-9d02-673bda602b53, status phase: Failed. Waiting for statefulset controller to delete. May 5 01:05:18.165: INFO: Observed stateful pod in namespace: statefulset-5805, name: ss-0, uid: b796f50a-dc2c-4771-9d02-673bda602b53, status phase: Failed. Waiting for statefulset controller to delete. May 5 01:05:18.188: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5805 STEP: Removing pod with conflicting port in namespace statefulset-5805 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-5805 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 5 01:05:22.391: INFO: Deleting all statefulset in ns statefulset-5805 May 5 01:05:22.395: INFO: Scaling statefulset ss to 0 May 5 01:05:42.478: INFO: Waiting for statefulset status.replicas updated to 0 May 5 01:05:42.482: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:05:42.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5805" for this suite. 
• [SLOW TEST:31.147 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":288,"completed":277,"skipped":4622,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:05:42.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 5 01:05:50.607: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 5 01:05:50.614: INFO: Pod pod-with-poststart-http-hook still exists May 5 01:05:52.614: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 5 01:05:52.618: INFO: Pod pod-with-poststart-http-hook still exists May 5 01:05:54.614: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 5 01:05:54.619: INFO: Pod pod-with-poststart-http-hook still exists May 5 01:05:56.614: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 5 01:05:56.619: INFO: Pod pod-with-poststart-http-hook still exists May 5 01:05:58.614: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 5 01:05:58.619: INFO: Pod pod-with-poststart-http-hook still exists May 5 01:06:00.614: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 5 01:06:00.619: INFO: Pod pod-with-poststart-http-hook still exists May 5 01:06:02.614: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 5 01:06:02.618: INFO: Pod pod-with-poststart-http-hook still exists May 5 01:06:04.614: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 5 01:06:04.618: INFO: Pod pod-with-poststart-http-hook still exists May 5 01:06:06.614: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 5 01:06:06.618: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:06:06.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5253" for this suite. 
• [SLOW TEST:24.120 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":288,"completed":278,"skipped":4625,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:06:06.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-747 STEP: creating replication controller nodeport-test in namespace services-747 I0505 01:06:06.770464 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-747, replica count: 2 I0505 01:06:09.820865 7 
runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0505 01:06:12.821093 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 5 01:06:12.821: INFO: Creating new exec pod May 5 01:06:17.875: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-747 execpod5rbf7 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 5 01:06:18.084: INFO: stderr: "I0505 01:06:17.998753 3849 log.go:172] (0xc000aa3d90) (0xc00065a640) Create stream\nI0505 01:06:17.998814 3849 log.go:172] (0xc000aa3d90) (0xc00065a640) Stream added, broadcasting: 1\nI0505 01:06:18.003374 3849 log.go:172] (0xc000aa3d90) Reply frame received for 1\nI0505 01:06:18.003427 3849 log.go:172] (0xc000aa3d90) (0xc0005f1540) Create stream\nI0505 01:06:18.003444 3849 log.go:172] (0xc000aa3d90) (0xc0005f1540) Stream added, broadcasting: 3\nI0505 01:06:18.004293 3849 log.go:172] (0xc000aa3d90) Reply frame received for 3\nI0505 01:06:18.004328 3849 log.go:172] (0xc000aa3d90) (0xc0004ba280) Create stream\nI0505 01:06:18.004339 3849 log.go:172] (0xc000aa3d90) (0xc0004ba280) Stream added, broadcasting: 5\nI0505 01:06:18.005461 3849 log.go:172] (0xc000aa3d90) Reply frame received for 5\nI0505 01:06:18.077745 3849 log.go:172] (0xc000aa3d90) Data frame received for 3\nI0505 01:06:18.077816 3849 log.go:172] (0xc0005f1540) (3) Data frame handling\nI0505 01:06:18.077859 3849 log.go:172] (0xc000aa3d90) Data frame received for 5\nI0505 01:06:18.077885 3849 log.go:172] (0xc0004ba280) (5) Data frame handling\nI0505 01:06:18.077906 3849 log.go:172] (0xc0004ba280) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0505 01:06:18.077939 3849 log.go:172] (0xc000aa3d90) Data frame received for 5\nI0505 
01:06:18.078006 3849 log.go:172] (0xc0004ba280) (5) Data frame handling\nI0505 01:06:18.079475 3849 log.go:172] (0xc000aa3d90) Data frame received for 1\nI0505 01:06:18.079514 3849 log.go:172] (0xc00065a640) (1) Data frame handling\nI0505 01:06:18.079565 3849 log.go:172] (0xc00065a640) (1) Data frame sent\nI0505 01:06:18.079597 3849 log.go:172] (0xc000aa3d90) (0xc00065a640) Stream removed, broadcasting: 1\nI0505 01:06:18.079626 3849 log.go:172] (0xc000aa3d90) Go away received\nI0505 01:06:18.080096 3849 log.go:172] (0xc000aa3d90) (0xc00065a640) Stream removed, broadcasting: 1\nI0505 01:06:18.080121 3849 log.go:172] (0xc000aa3d90) (0xc0005f1540) Stream removed, broadcasting: 3\nI0505 01:06:18.080133 3849 log.go:172] (0xc000aa3d90) (0xc0004ba280) Stream removed, broadcasting: 5\n" May 5 01:06:18.085: INFO: stdout: "" May 5 01:06:18.086: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-747 execpod5rbf7 -- /bin/sh -x -c nc -zv -t -w 2 10.110.163.239 80' May 5 01:06:18.300: INFO: stderr: "I0505 01:06:18.222236 3871 log.go:172] (0xc000ace8f0) (0xc000711040) Create stream\nI0505 01:06:18.222295 3871 log.go:172] (0xc000ace8f0) (0xc000711040) Stream added, broadcasting: 1\nI0505 01:06:18.224797 3871 log.go:172] (0xc000ace8f0) Reply frame received for 1\nI0505 01:06:18.224838 3871 log.go:172] (0xc000ace8f0) (0xc0004a8f00) Create stream\nI0505 01:06:18.224854 3871 log.go:172] (0xc000ace8f0) (0xc0004a8f00) Stream added, broadcasting: 3\nI0505 01:06:18.225986 3871 log.go:172] (0xc000ace8f0) Reply frame received for 3\nI0505 01:06:18.226047 3871 log.go:172] (0xc000ace8f0) (0xc0000ddc20) Create stream\nI0505 01:06:18.226067 3871 log.go:172] (0xc000ace8f0) (0xc0000ddc20) Stream added, broadcasting: 5\nI0505 01:06:18.226979 3871 log.go:172] (0xc000ace8f0) Reply frame received for 5\nI0505 01:06:18.292308 3871 log.go:172] (0xc000ace8f0) Data frame received for 3\nI0505 01:06:18.292352 3871 log.go:172] 
(0xc0004a8f00) (3) Data frame handling\nI0505 01:06:18.292410 3871 log.go:172] (0xc000ace8f0) Data frame received for 5\nI0505 01:06:18.292448 3871 log.go:172] (0xc0000ddc20) (5) Data frame handling\nI0505 01:06:18.292476 3871 log.go:172] (0xc0000ddc20) (5) Data frame sent\n+ nc -zv -t -w 2 10.110.163.239 80\nConnection to 10.110.163.239 80 port [tcp/http] succeeded!\nI0505 01:06:18.292496 3871 log.go:172] (0xc000ace8f0) Data frame received for 5\nI0505 01:06:18.292564 3871 log.go:172] (0xc0000ddc20) (5) Data frame handling\nI0505 01:06:18.294103 3871 log.go:172] (0xc000ace8f0) Data frame received for 1\nI0505 01:06:18.294138 3871 log.go:172] (0xc000711040) (1) Data frame handling\nI0505 01:06:18.294157 3871 log.go:172] (0xc000711040) (1) Data frame sent\nI0505 01:06:18.294188 3871 log.go:172] (0xc000ace8f0) (0xc000711040) Stream removed, broadcasting: 1\nI0505 01:06:18.294211 3871 log.go:172] (0xc000ace8f0) Go away received\nI0505 01:06:18.294671 3871 log.go:172] (0xc000ace8f0) (0xc000711040) Stream removed, broadcasting: 1\nI0505 01:06:18.294696 3871 log.go:172] (0xc000ace8f0) (0xc0004a8f00) Stream removed, broadcasting: 3\nI0505 01:06:18.294708 3871 log.go:172] (0xc000ace8f0) (0xc0000ddc20) Stream removed, broadcasting: 5\n" May 5 01:06:18.300: INFO: stdout: "" May 5 01:06:18.300: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-747 execpod5rbf7 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31546' May 5 01:06:18.536: INFO: stderr: "I0505 01:06:18.466640 3891 log.go:172] (0xc00068e790) (0xc00056fea0) Create stream\nI0505 01:06:18.466701 3891 log.go:172] (0xc00068e790) (0xc00056fea0) Stream added, broadcasting: 1\nI0505 01:06:18.469050 3891 log.go:172] (0xc00068e790) Reply frame received for 1\nI0505 01:06:18.469085 3891 log.go:172] (0xc00068e790) (0xc000530460) Create stream\nI0505 01:06:18.469093 3891 log.go:172] (0xc00068e790) (0xc000530460) Stream added, broadcasting: 3\nI0505 
01:06:18.470221 3891 log.go:172] (0xc00068e790) Reply frame received for 3\nI0505 01:06:18.470278 3891 log.go:172] (0xc00068e790) (0xc000518fa0) Create stream\nI0505 01:06:18.470296 3891 log.go:172] (0xc00068e790) (0xc000518fa0) Stream added, broadcasting: 5\nI0505 01:06:18.471219 3891 log.go:172] (0xc00068e790) Reply frame received for 5\nI0505 01:06:18.528832 3891 log.go:172] (0xc00068e790) Data frame received for 5\nI0505 01:06:18.528869 3891 log.go:172] (0xc000518fa0) (5) Data frame handling\nI0505 01:06:18.528902 3891 log.go:172] (0xc000518fa0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 31546\nConnection to 172.17.0.13 31546 port [tcp/31546] succeeded!\nI0505 01:06:18.528995 3891 log.go:172] (0xc00068e790) Data frame received for 5\nI0505 01:06:18.529018 3891 log.go:172] (0xc000518fa0) (5) Data frame handling\nI0505 01:06:18.529550 3891 log.go:172] (0xc00068e790) Data frame received for 3\nI0505 01:06:18.529591 3891 log.go:172] (0xc000530460) (3) Data frame handling\nI0505 01:06:18.531403 3891 log.go:172] (0xc00068e790) Data frame received for 1\nI0505 01:06:18.531438 3891 log.go:172] (0xc00056fea0) (1) Data frame handling\nI0505 01:06:18.531466 3891 log.go:172] (0xc00056fea0) (1) Data frame sent\nI0505 01:06:18.531489 3891 log.go:172] (0xc00068e790) (0xc00056fea0) Stream removed, broadcasting: 1\nI0505 01:06:18.531522 3891 log.go:172] (0xc00068e790) Go away received\nI0505 01:06:18.531936 3891 log.go:172] (0xc00068e790) (0xc00056fea0) Stream removed, broadcasting: 1\nI0505 01:06:18.531959 3891 log.go:172] (0xc00068e790) (0xc000530460) Stream removed, broadcasting: 3\nI0505 01:06:18.531969 3891 log.go:172] (0xc00068e790) (0xc000518fa0) Stream removed, broadcasting: 5\n" May 5 01:06:18.536: INFO: stdout: "" May 5 01:06:18.537: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-747 execpod5rbf7 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31546' May 5 01:06:18.774: INFO: 
stderr: "I0505 01:06:18.693995 3913 log.go:172] (0xc0009c9130) (0xc000af6460) Create stream\nI0505 01:06:18.694067 3913 log.go:172] (0xc0009c9130) (0xc000af6460) Stream added, broadcasting: 1\nI0505 01:06:18.698998 3913 log.go:172] (0xc0009c9130) Reply frame received for 1\nI0505 01:06:18.699046 3913 log.go:172] (0xc0009c9130) (0xc000856f00) Create stream\nI0505 01:06:18.699062 3913 log.go:172] (0xc0009c9130) (0xc000856f00) Stream added, broadcasting: 3\nI0505 01:06:18.700136 3913 log.go:172] (0xc0009c9130) Reply frame received for 3\nI0505 01:06:18.700184 3913 log.go:172] (0xc0009c9130) (0xc00056a1e0) Create stream\nI0505 01:06:18.700211 3913 log.go:172] (0xc0009c9130) (0xc00056a1e0) Stream added, broadcasting: 5\nI0505 01:06:18.701266 3913 log.go:172] (0xc0009c9130) Reply frame received for 5\nI0505 01:06:18.762704 3913 log.go:172] (0xc0009c9130) Data frame received for 3\nI0505 01:06:18.762739 3913 log.go:172] (0xc000856f00) (3) Data frame handling\nI0505 01:06:18.763371 3913 log.go:172] (0xc0009c9130) Data frame received for 5\nI0505 01:06:18.763407 3913 log.go:172] (0xc00056a1e0) (5) Data frame handling\nI0505 01:06:18.763436 3913 log.go:172] (0xc00056a1e0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 31546\nConnection to 172.17.0.12 31546 port [tcp/31546] succeeded!\nI0505 01:06:18.763533 3913 log.go:172] (0xc0009c9130) Data frame received for 5\nI0505 01:06:18.763679 3913 log.go:172] (0xc00056a1e0) (5) Data frame handling\nI0505 01:06:18.769894 3913 log.go:172] (0xc0009c9130) Data frame received for 1\nI0505 01:06:18.769908 3913 log.go:172] (0xc000af6460) (1) Data frame handling\nI0505 01:06:18.769921 3913 log.go:172] (0xc000af6460) (1) Data frame sent\nI0505 01:06:18.769930 3913 log.go:172] (0xc0009c9130) (0xc000af6460) Stream removed, broadcasting: 1\nI0505 01:06:18.769940 3913 log.go:172] (0xc0009c9130) Go away received\nI0505 01:06:18.770264 3913 log.go:172] (0xc0009c9130) (0xc000af6460) Stream removed, broadcasting: 1\nI0505 01:06:18.770285 3913 
log.go:172] (0xc0009c9130) (0xc000856f00) Stream removed, broadcasting: 3\nI0505 01:06:18.770292 3913 log.go:172] (0xc0009c9130) (0xc00056a1e0) Stream removed, broadcasting: 5\n" May 5 01:06:18.774: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:06:18.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-747" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:12.150 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":288,"completed":279,"skipped":4687,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:06:18.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support proportional scaling [Conformance] 
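The NodePort test above verifies reachability by exec'ing into a helper pod and running `nc` against the Service's ClusterIP and then each node's NodePort. A minimal sketch of that probe pattern, with the namespace, pod name, and addresses copied from this run; the command is only constructed here, since actually running it assumes a live cluster and the kubeconfig from the log:

```python
import shlex

def probe_cmd(host, port, namespace="services-747", pod="execpod5rbf7",
              server="https://172.30.12.66:32773",
              kubeconfig="/root/.kube/config"):
    """Build the kubectl exec command the e2e test runs: a 2-second TCP
    connect check (nc -zv -t -w 2) from inside the helper pod."""
    return ["kubectl", f"--server={server}", f"--kubeconfig={kubeconfig}",
            "exec", f"--namespace={namespace}", pod, "--",
            "/bin/sh", "-x", "-c", f"nc -zv -t -w 2 {host} {port}"]

# The three checks from the log: ClusterIP:port, then NodeIP:NodePort per node.
for host, port in [("10.110.163.239", 80),
                   ("172.17.0.13", 31546),
                   ("172.17.0.12", 31546)]:
    print(shlex.join(probe_cmd(host, port)))
    # To execute against a real cluster: subprocess.run(probe_cmd(...), check=True)
```

The `+ nc ...` lines in the stderr above come from the `-x` flag, which makes `/bin/sh` echo each command before running it; "succeeded!" on all three probes is what lets the test pass.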
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 5 01:06:18.828: INFO: Creating deployment "webserver-deployment" May 5 01:06:18.836: INFO: Waiting for observed generation 1 May 5 01:06:20.856: INFO: Waiting for all required pods to come up May 5 01:06:21.038: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 5 01:06:31.082: INFO: Waiting for deployment "webserver-deployment" to complete May 5 01:06:31.278: INFO: Updating deployment "webserver-deployment" with a non-existent image May 5 01:06:31.347: INFO: Updating deployment webserver-deployment May 5 01:06:31.347: INFO: Waiting for observed generation 2 May 5 01:06:34.073: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 5 01:06:34.782: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 5 01:06:35.007: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 5 01:06:35.737: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 5 01:06:35.737: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 5 01:06:35.739: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 5 01:06:35.883: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 5 01:06:35.883: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 5 01:06:36.091: INFO: Updating deployment webserver-deployment May 5 01:06:36.091: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 5 01:06:37.698: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 5 01:06:40.372: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] 
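The scaling step above, from 10 to 30 replicas while two ReplicaSets coexist (the old one at 8, the broken `webserver:404` one at 5, with maxSurge=3 allowing 33 pods total), ends with .spec.replicas of 20 and 13. A simplified sketch of the proportional-scaling arithmetic that produces those numbers; this is a hand-rolled approximation using the figures from this log, not the actual kube-controller-manager code:

```python
def proportional_scale(rs_sizes, allowed_total):
    """Resize each ReplicaSet to its proportional share of allowed_total
    (rounded to the nearest integer), capping the running total so no more
    replicas are handed out than the surge budget allows."""
    current_total = sum(rs_sizes)
    to_add = allowed_total - current_total
    new_sizes = []
    for rs in rs_sizes:
        # proportional new size for this ReplicaSet, rounded
        share = round(rs * allowed_total / current_total) - rs
        share = min(share, to_add)  # never exceed the remaining budget
        to_add -= share
        new_sizes.append(rs + share)
    return new_sizes

# Old RS at 8, new RS at 5; deployment scaled to 30 with maxSurge=3 -> 33 allowed.
print(proportional_scale([8, 5], 33))  # -> [20, 13], matching the log
```

Both ReplicaSets grow without either being driven to zero, which is the point of the test: scaling mid-rollout preserves the relative sizes rather than restarting the rollout.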
[sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 5 01:06:40.995: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-2517 /apis/apps/v1/namespaces/deployment-2517/deployments/webserver-deployment ef48923a-4f5d-41d5-ab6e-212003e969bc 1542415 3 2020-05-05 01:06:18 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-05 01:06:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-05 01:06:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 
+0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00427dc78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-05 01:06:37 +0000 UTC,LastTransitionTime:2020-05-05 01:06:37 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-05-05 01:06:38 +0000 UTC,LastTransitionTime:2020-05-05 01:06:18 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 5 01:06:41.395: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 deployment-2517 /apis/apps/v1/namespaces/deployment-2517/replicasets/webserver-deployment-6676bcd6d4 f7c0ce5a-33a1-4895-8600-80c35c95bff4 1542403 3 2020-05-05 01:06:31 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 
deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment ef48923a-4f5d-41d5-ab6e-212003e969bc 0xc00433c127 0xc00433c128}] [] [{kube-controller-manager Update apps/v1 2020-05-05 01:06:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef48923a-4f5d-41d5-ab6e-212003e969bc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00433c1a8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 5 01:06:41.395: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 5 01:06:41.395: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-2517 /apis/apps/v1/namespaces/deployment-2517/replicasets/webserver-deployment-84855cf797 f9d7a71c-9816-49d5-baaf-fb9260feda70 1542396 3 2020-05-05 01:06:18 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment ef48923a-4f5d-41d5-ab6e-212003e969bc 0xc00433c207 0xc00433c208}] [] [{kube-controller-manager Update apps/v1 2020-05-05 01:06:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef48923a-4f5d-41d5-ab6e-212003e969bc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00433c278 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler 
[] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 5 01:06:41.838: INFO: Pod "webserver-deployment-6676bcd6d4-5dx8q" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-5dx8q webserver-deployment-6676bcd6d4- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-6676bcd6d4-5dx8q 3bfc9fe2-9660-4032-bae7-161c82360b90 1542313 0 2020-05-05 01:06:31 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f7c0ce5a-33a1-4895-8600-80c35c95bff4 0xc00433c7b7 0xc00433c7b8}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f7c0ce5a-33a1-4895-8600-80c35c95bff4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:32 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:31 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-05 01:06:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.839: INFO: Pod "webserver-deployment-6676bcd6d4-5kpww" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-5kpww webserver-deployment-6676bcd6d4- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-6676bcd6d4-5kpww 36db6360-87ae-416f-8f45-6107c87590fd 1542440 0 2020-05-05 01:06:37 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f7c0ce5a-33a1-4895-8600-80c35c95bff4 0xc00433c967 0xc00433c968}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f7c0ce5a-33a1-4895-8600-80c35c95bff4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:39 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-05 01:06:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.839: INFO: Pod "webserver-deployment-6676bcd6d4-685gs" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-685gs webserver-deployment-6676bcd6d4- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-6676bcd6d4-685gs f95ab988-b4f9-4675-9b74-d184878b8403 1542431 0 2020-05-05 01:06:37 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f7c0ce5a-33a1-4895-8600-80c35c95bff4 0xc00433cb17 0xc00433cb18}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f7c0ce5a-33a1-4895-8600-80c35c95bff4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:37 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-05 01:06:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.839: INFO: Pod "webserver-deployment-6676bcd6d4-6pv4q" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-6pv4q webserver-deployment-6676bcd6d4- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-6676bcd6d4-6pv4q fb11924c-82a9-422f-aa93-68a779eaac55 1542462 0 2020-05-05 01:06:31 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f7c0ce5a-33a1-4895-8600-80c35c95bff4 0xc00433ccc7 0xc00433ccc8}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f7c0ce5a-33a1-4895-8600-80c35c95bff4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:40 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.213\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDe
vices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 
01:06:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.213,StartTime:2020-05-05 01:06:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.213,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.840: INFO: Pod "webserver-deployment-6676bcd6d4-b76tm" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-b76tm webserver-deployment-6676bcd6d4- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-6676bcd6d4-b76tm 68599996-628b-4112-95bd-7561e79a04f0 1542448 0 2020-05-05 01:06:37 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f7c0ce5a-33a1-4895-8600-80c35c95bff4 0xc00433cea7 0xc00433cea8}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f7c0ce5a-33a1-4895-8600-80c35c95bff4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMoun
ts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-05 01:06:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.840: INFO: Pod "webserver-deployment-6676bcd6d4-h4tv9" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-h4tv9 webserver-deployment-6676bcd6d4- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-6676bcd6d4-h4tv9 f9389b92-e39e-4854-ac06-26428bd1cbc2 1542425 0 2020-05-05 01:06:37 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f7c0ce5a-33a1-4895-8600-80c35c95bff4 0xc00433d057 0xc00433d058}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f7c0ce5a-33a1-4895-8600-80c35c95bff4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMoun
ts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-05 01:06:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.840: INFO: Pod "webserver-deployment-6676bcd6d4-jthj5" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-jthj5 webserver-deployment-6676bcd6d4- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-6676bcd6d4-jthj5 fccb19cc-1f59-4688-bde5-56112ce89e3f 1542456 0 2020-05-05 01:06:37 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f7c0ce5a-33a1-4895-8600-80c35c95bff4 0xc00433d207 0xc00433d208}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f7c0ce5a-33a1-4895-8600-80c35c95bff4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMoun
ts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-05 01:06:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.840: INFO: Pod "webserver-deployment-6676bcd6d4-k58kv" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-k58kv webserver-deployment-6676bcd6d4- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-6676bcd6d4-k58kv aa8cc65f-cb4c-4426-a9ee-861ab980de1c 1542302 0 2020-05-05 01:06:31 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f7c0ce5a-33a1-4895-8600-80c35c95bff4 0xc00433d3b7 0xc00433d3b8}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f7c0ce5a-33a1-4895-8600-80c35c95bff4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMoun
ts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-05-05 01:06:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-05 01:06:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.841: INFO: Pod "webserver-deployment-6676bcd6d4-lsjm5" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-lsjm5 webserver-deployment-6676bcd6d4- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-6676bcd6d4-lsjm5 2bb60447-c996-4d5f-bea5-19a02a5eebc6 1542324 0 2020-05-05 01:06:31 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f7c0ce5a-33a1-4895-8600-80c35c95bff4 0xc00433d567 0xc00433d568}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f7c0ce5a-33a1-4895-8600-80c35c95bff4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMoun
ts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-05-05 01:06:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-05 01:06:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.841: INFO: Pod "webserver-deployment-6676bcd6d4-nqmfk" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-nqmfk webserver-deployment-6676bcd6d4- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-6676bcd6d4-nqmfk 0d593121-4c17-4333-8151-aa3a50e8c639 1542450 0 2020-05-05 01:06:37 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f7c0ce5a-33a1-4895-8600-80c35c95bff4 0xc00433d717 0xc00433d718}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f7c0ce5a-33a1-4895-8600-80c35c95bff4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMoun
ts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-05 01:06:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.841: INFO: Pod "webserver-deployment-6676bcd6d4-p4p92" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-p4p92 webserver-deployment-6676bcd6d4- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-6676bcd6d4-p4p92 5f19e5aa-f57b-4b5f-acdd-52d7efa9d835 1542465 0 2020-05-05 01:06:38 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f7c0ce5a-33a1-4895-8600-80c35c95bff4 0xc00433d8c7 0xc00433d8c8}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f7c0ce5a-33a1-4895-8600-80c35c95bff4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMoun
ts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-05 01:06:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.842: INFO: Pod "webserver-deployment-6676bcd6d4-pxf5l" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-pxf5l webserver-deployment-6676bcd6d4- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-6676bcd6d4-pxf5l 11ae8e5b-66ed-4c3f-ba36-7d1f5882537b 1542412 0 2020-05-05 01:06:37 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f7c0ce5a-33a1-4895-8600-80c35c95bff4 0xc00433da77 0xc00433da78}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f7c0ce5a-33a1-4895-8600-80c35c95bff4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMoun
ts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-05 01:06:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.842: INFO: Pod "webserver-deployment-6676bcd6d4-vlt8m" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-vlt8m webserver-deployment-6676bcd6d4- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-6676bcd6d4-vlt8m 8e1f2db3-0d39-472d-9248-b95c56338323 1542374 0 2020-05-05 01:06:31 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f7c0ce5a-33a1-4895-8600-80c35c95bff4 0xc00433dc37 0xc00433dc38}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f7c0ce5a-33a1-4895-8600-80c35c95bff4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.120\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resourc
es:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCon
dition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.120,StartTime:2020-05-05 01:06:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.120,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.842: INFO: Pod "webserver-deployment-84855cf797-5679n" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-5679n webserver-deployment-84855cf797- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-84855cf797-5679n 80d78678-0d6e-4abc-8d91-ea54b91788ba 1542246 0 2020-05-05 01:06:18 +0000 UTC 
map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f9d7a71c-9816-49d5-baaf-fb9260feda70 0xc00433de17 0xc00433de18}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:18 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9d7a71c-9816-49d5-baaf-fb9260feda70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:30 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.119\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolu
me:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassNam
e:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.119,StartTime:2020-05-05 01:06:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-05 01:06:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://45f29f41c19b30ceac3f294227c0e918dd166afc815e35f282efcc5027154511,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.119,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.842: INFO: Pod "webserver-deployment-84855cf797-5zxcz" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-5zxcz webserver-deployment-84855cf797- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-84855cf797-5zxcz 208d0868-cfe1-499c-ba61-2d8b8e122b9d 1542206 0 2020-05-05 01:06:18 
+0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f9d7a71c-9816-49d5-baaf-fb9260feda70 0xc00433dfc7 0xc00433dfc8}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:18 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9d7a71c-9816-49d5-baaf-fb9260feda70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.116\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,Po
rtworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runti
meClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.116,StartTime:2020-05-05 01:06:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-05 01:06:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9055e2165c61f8d4a7965ffd09890caebf461e6470bb0594139424e9845c8d09,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.116,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.843: INFO: Pod "webserver-deployment-84855cf797-7dlqp" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-7dlqp webserver-deployment-84855cf797- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-84855cf797-7dlqp 58e23d62-2ec5-4356-bdb2-fd61c32460ee 1542418 0 
2020-05-05 01:06:37 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f9d7a71c-9816-49d5-baaf-fb9260feda70 0xc004354177 0xc004354178}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9d7a71c-9816-49d5-baaf-fb9260feda70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,
ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,E
nableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-05 01:06:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.843: INFO: Pod "webserver-deployment-84855cf797-8g6ck" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-8g6ck webserver-deployment-84855cf797- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-84855cf797-8g6ck b2a98f30-fc60-48c7-8968-e9aad05492af 1542419 0 2020-05-05 01:06:37 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet 
webserver-deployment-84855cf797 f9d7a71c-9816-49d5-baaf-fb9260feda70 0xc004354327 0xc004354328}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9d7a71c-9816-49d5-baaf-fb9260feda70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Imag
e:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]Topolo
gySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-05 01:06:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.843: INFO: Pod "webserver-deployment-84855cf797-bdjjz" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-bdjjz webserver-deployment-84855cf797- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-84855cf797-bdjjz 96beefca-8e00-4fda-8e93-aeee668e5310 1542423 0 2020-05-05 01:06:37 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f9d7a71c-9816-49d5-baaf-fb9260feda70 0xc0043544b7 0xc0043544b8}] [] 
[{kube-controller-manager Update v1 2020-05-05 01:06:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9d7a71c-9816-49d5-baaf-fb9260feda70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[
]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Co
nditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-05 01:06:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.843: INFO: Pod "webserver-deployment-84855cf797-cc7mh" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-cc7mh webserver-deployment-84855cf797- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-84855cf797-cc7mh 247c81d5-fe79-4ebe-a59b-52ae9e12b37e 1542411 0 2020-05-05 01:06:37 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f9d7a71c-9816-49d5-baaf-fb9260feda70 0xc004354657 0xc004354658}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9d7a71c-9816-49d5-baaf-fb9260feda70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:Res
ourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbe
Time:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-05 01:06:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.844: INFO: Pod "webserver-deployment-84855cf797-dml2p" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-dml2p webserver-deployment-84855cf797- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-84855cf797-dml2p b0a98102-c94a-4d12-89a3-896443b9a34e 1542400 0 2020-05-05 01:06:36 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f9d7a71c-9816-49d5-baaf-fb9260feda70 0xc0043547e7 0xc0043547e8}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9d7a71c-9816-49d5-baaf-fb9260feda70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:Res
ourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbe
Time:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-05 01:06:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.844: INFO: Pod "webserver-deployment-84855cf797-f8v8v" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-f8v8v webserver-deployment-84855cf797- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-84855cf797-f8v8v 21bfe503-156d-4a2c-9fca-43d5a28c05be 1542166 0 2020-05-05 01:06:18 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f9d7a71c-9816-49d5-baaf-fb9260feda70 0xc004354977 0xc004354978}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9d7a71c-9816-49d5-baaf-fb9260feda70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.208\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequiremen
ts{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Ty
pe:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.208,StartTime:2020-05-05 01:06:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-05 01:06:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ed4bed55f3addd54482650382a5fdf23f64ceeee28a17a77848a556375899025,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.208,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.844: INFO: Pod "webserver-deployment-84855cf797-h7csq" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-h7csq webserver-deployment-84855cf797- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-84855cf797-h7csq b9eaa692-7352-40cf-9a52-73439c43aa9f 1542192 0 2020-05-05 01:06:18 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f9d7a71c-9816-49d5-baaf-fb9260feda70 0xc004354b37 0xc004354b38}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:18 +0000 UTC 
FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9d7a71c-9816-49d5-baaf-fb9260feda70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.209\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceR
equirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCon
dition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.209,StartTime:2020-05-05 01:06:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-05 01:06:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://727bb1cffb23549cc1df5b7a42215f85dce0a9c26ff768d3c40aeb22b2a189e7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.209,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.844: INFO: Pod "webserver-deployment-84855cf797-h94vw" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-h94vw webserver-deployment-84855cf797- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-84855cf797-h94vw aaca3115-dff6-45e7-a871-1537b14a0334 1542435 0 2020-05-05 01:06:37 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f9d7a71c-9816-49d5-baaf-fb9260feda70 0xc004354ce7 0xc004354ce8}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:37 
+0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9d7a71c-9816-49d5-baaf-fb9260feda70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:Resource
List{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Sta
tus:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-05 01:06:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.844: INFO: Pod "webserver-deployment-84855cf797-h97ch" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-h97ch webserver-deployment-84855cf797- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-84855cf797-h97ch 4acb3a26-8ee9-480f-a491-d8bdd5d0f009 1542437 0 2020-05-05 01:06:37 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f9d7a71c-9816-49d5-baaf-fb9260feda70 0xc004354e87 0xc004354e88}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9d7a71c-9816-49d5-baaf-fb9260feda70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:Res
ourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbe
Time:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-05 01:06:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.845: INFO: Pod "webserver-deployment-84855cf797-hvrr2" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-hvrr2 webserver-deployment-84855cf797- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-84855cf797-hvrr2 a41ce0d7-5592-4b56-9eef-3541a258ca93 1542401 0 2020-05-05 01:06:37 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f9d7a71c-9816-49d5-baaf-fb9260feda70 0xc004355017 0xc004355018}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9d7a71c-9816-49d5-baaf-fb9260feda70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:Res
ourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeT
ime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-05 01:06:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.845: INFO: Pod "webserver-deployment-84855cf797-jsp7v" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-jsp7v webserver-deployment-84855cf797- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-84855cf797-jsp7v 7fc22681-3ace-4fe2-ac5b-29feba0a1097 1542430 0 2020-05-05 01:06:37 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f9d7a71c-9816-49d5-baaf-fb9260feda70 0xc0043551b7 0xc0043551b8}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9d7a71c-9816-49d5-baaf-fb9260feda70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:Res
ourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeT
ime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-05 01:06:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.845: INFO: Pod "webserver-deployment-84855cf797-l7j2h" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-l7j2h webserver-deployment-84855cf797- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-84855cf797-l7j2h c2a14a1a-8b06-4503-8736-147af894e888 1542212 0 2020-05-05 01:06:18 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f9d7a71c-9816-49d5-baaf-fb9260feda70 0xc004355347 0xc004355348}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9d7a71c-9816-49d5-baaf-fb9260feda70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:30 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.117\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequiremen
ts{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{T
ype:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.117,StartTime:2020-05-05 01:06:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-05 01:06:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://befcaaad61bf6d9fc335a7287c21dc4f25eb34fd09c0ee7e74b2d6ed8a033c57,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.117,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.845: INFO: Pod "webserver-deployment-84855cf797-sbd8h" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-sbd8h webserver-deployment-84855cf797- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-84855cf797-sbd8h 2ee37d45-2156-411e-9c62-47a1e9097491 1542461 0 2020-05-05 01:06:37 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f9d7a71c-9816-49d5-baaf-fb9260feda70 0xc004355507 0xc004355508}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:37 +0000 
UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9d7a71c-9816-49d5-baaf-fb9260feda70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{}
,Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:T
rue,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-05 01:06:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.845: INFO: Pod "webserver-deployment-84855cf797-v4lxj" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-v4lxj webserver-deployment-84855cf797- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-84855cf797-v4lxj e794eff5-3315-40ab-a8b4-ff7c61f9a557 1542176 0 2020-05-05 01:06:18 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f9d7a71c-9816-49d5-baaf-fb9260feda70 0xc004355697 0xc004355698}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9d7a71c-9816-49d5-baaf-fb9260feda70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.115\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequiremen
ts{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{T
ype:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.115,StartTime:2020-05-05 01:06:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-05 01:06:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://68753a87b54344ba12a38192ab77e9066b619c987a190daf49636a5877c44936,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.115,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.846: INFO: Pod "webserver-deployment-84855cf797-v5mk5" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-v5mk5 webserver-deployment-84855cf797- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-84855cf797-v5mk5 b5570f93-29a0-45d6-b447-9c35242bad22 1542457 0 2020-05-05 01:06:37 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f9d7a71c-9816-49d5-baaf-fb9260feda70 0xc004355847 0xc004355848}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:37 +0000 
UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9d7a71c-9816-49d5-baaf-fb9260feda70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{}
,Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:T
rue,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-05 01:06:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.846: INFO: Pod "webserver-deployment-84855cf797-vclxs" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-vclxs webserver-deployment-84855cf797- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-84855cf797-vclxs f87ce805-c76c-4927-802a-63196069f8d3 1542442 0 2020-05-05 01:06:37 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f9d7a71c-9816-49d5-baaf-fb9260feda70 0xc0043559d7 0xc0043559d8}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9d7a71c-9816-49d5-baaf-fb9260feda70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:Res
ourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbe
Time:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-05 01:06:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.846: INFO: Pod "webserver-deployment-84855cf797-wdxgt" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-wdxgt webserver-deployment-84855cf797- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-84855cf797-wdxgt b44ea4d5-dd13-40d4-bdad-f97df31e1d79 1542198 0 2020-05-05 01:06:18 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f9d7a71c-9816-49d5-baaf-fb9260feda70 0xc004355b67 0xc004355b68}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9d7a71c-9816-49d5-baaf-fb9260feda70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.210\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequiremen
ts{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Ty
pe:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.210,StartTime:2020-05-05 01:06:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-05 01:06:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://aaf2c01c7e1cb5bc619bde8a12b7b61ea4d01e926ab8766c7ad1a17699efb18a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.210,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 01:06:41.846: INFO: Pod "webserver-deployment-84855cf797-xghr6" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-xghr6 webserver-deployment-84855cf797- deployment-2517 /api/v1/namespaces/deployment-2517/pods/webserver-deployment-84855cf797-xghr6 9c5cb352-07a2-4603-a924-185436ad38bf 1542242 0 2020-05-05 01:06:18 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f9d7a71c-9816-49d5-baaf-fb9260feda70 0xc004355d17 0xc004355d18}] [] [{kube-controller-manager Update v1 2020-05-05 01:06:18 +0000 UTC 
FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9d7a71c-9816-49d5-baaf-fb9260feda70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-05 01:06:30 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.118\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fffk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fffk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceR
equirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fffk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCo
ndition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 01:06:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.118,StartTime:2020-05-05 01:06:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-05 01:06:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://cf1e289c696047a5c464a32f6938897ebb04814e7cbe1a1e4f65bb244bd77123,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.118,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:06:41.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2517" for this suite. 
• [SLOW TEST:24.697 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":288,"completed":280,"skipped":4716,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:06:43.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-75a70026-9130-4111-ad99-2f27fa296297 STEP: Creating a pod to test consume configMaps May 5 01:06:46.455: INFO: Waiting up to 5m0s for pod "pod-configmaps-6b8d3ebe-f82f-4da5-93d9-49e052ec15f0" in namespace "configmap-3235" to be "Succeeded or Failed" May 5 01:06:47.058: INFO: Pod "pod-configmaps-6b8d3ebe-f82f-4da5-93d9-49e052ec15f0": Phase="Pending", Reason="", readiness=false. Elapsed: 603.319413ms May 5 01:06:49.091: INFO: Pod "pod-configmaps-6b8d3ebe-f82f-4da5-93d9-49e052ec15f0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.635623726s May 5 01:06:51.781: INFO: Pod "pod-configmaps-6b8d3ebe-f82f-4da5-93d9-49e052ec15f0": Phase="Pending", Reason="", readiness=false. Elapsed: 5.326402613s May 5 01:06:54.438: INFO: Pod "pod-configmaps-6b8d3ebe-f82f-4da5-93d9-49e052ec15f0": Phase="Pending", Reason="", readiness=false. Elapsed: 7.982955238s May 5 01:06:56.571: INFO: Pod "pod-configmaps-6b8d3ebe-f82f-4da5-93d9-49e052ec15f0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.115512592s May 5 01:06:58.635: INFO: Pod "pod-configmaps-6b8d3ebe-f82f-4da5-93d9-49e052ec15f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.180188983s STEP: Saw pod success May 5 01:06:58.635: INFO: Pod "pod-configmaps-6b8d3ebe-f82f-4da5-93d9-49e052ec15f0" satisfied condition "Succeeded or Failed" May 5 01:06:58.642: INFO: Trying to get logs from node latest-worker pod pod-configmaps-6b8d3ebe-f82f-4da5-93d9-49e052ec15f0 container configmap-volume-test: STEP: delete the pod May 5 01:06:58.709: INFO: Waiting for pod pod-configmaps-6b8d3ebe-f82f-4da5-93d9-49e052ec15f0 to disappear May 5 01:06:58.726: INFO: Pod pod-configmaps-6b8d3ebe-f82f-4da5-93d9-49e052ec15f0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:06:58.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3235" for this suite. 
• [SLOW TEST:15.284 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":281,"skipped":4740,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:06:58.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 5 01:06:59.013: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 01:06:59.030: INFO: Number of nodes with available pods: 0 May 5 01:06:59.030: INFO: Node latest-worker is running more than one daemon pod May 5 01:07:00.380: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 01:07:01.079: INFO: Number of nodes with available pods: 0 May 5 01:07:01.079: INFO: Node latest-worker is running more than one daemon pod May 5 01:07:02.313: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 01:07:02.639: INFO: Number of nodes with available pods: 0 May 5 01:07:02.639: INFO: Node latest-worker is running more than one daemon pod May 5 01:07:03.151: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 01:07:03.500: INFO: Number of nodes with available pods: 0 May 5 01:07:03.500: INFO: Node latest-worker is running more than one daemon pod May 5 01:07:04.197: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 01:07:04.224: INFO: Number of nodes with available pods: 0 May 5 01:07:04.224: INFO: Node latest-worker is running more than one daemon pod May 5 01:07:05.948: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 01:07:06.008: INFO: Number of nodes with available pods: 1 May 5 01:07:06.008: INFO: Node 
latest-worker is running more than one daemon pod May 5 01:07:06.544: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 01:07:06.628: INFO: Number of nodes with available pods: 2 May 5 01:07:06.628: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 5 01:07:07.190: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 01:07:07.499: INFO: Number of nodes with available pods: 2 May 5 01:07:07.499: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7478, will wait for the garbage collector to delete the pods May 5 01:07:09.410: INFO: Deleting DaemonSet.extensions daemon-set took: 46.767926ms May 5 01:07:10.410: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.000345003s May 5 01:07:25.314: INFO: Number of nodes with available pods: 0 May 5 01:07:25.314: INFO: Number of running nodes: 0, number of available pods: 0 May 5 01:07:25.316: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7478/daemonsets","resourceVersion":"1542975"},"items":null} May 5 01:07:25.348: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7478/pods","resourceVersion":"1542975"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:07:25.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7478" for this suite. • [SLOW TEST:26.598 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":288,"completed":282,"skipped":4759,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:07:25.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod May 5 01:09:25.960: INFO: Successfully updated pod "var-expansion-27add68f-01ae-4a9b-8d3d-c523ae648421" STEP: waiting for pod running STEP: deleting the pod gracefully May 5 01:09:28.087: INFO: Deleting pod "var-expansion-27add68f-01ae-4a9b-8d3d-c523ae648421" in namespace "var-expansion-1028" May 5 
01:09:28.092: INFO: Wait up to 5m0s for pod "var-expansion-27add68f-01ae-4a9b-8d3d-c523ae648421" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:10:06.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1028" for this suite. • [SLOW TEST:160.784 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":288,"completed":283,"skipped":4760,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:10:06.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: 
Creating a pod to test downward API volume plugin May 5 01:10:06.294: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6c8ff810-17ba-4557-9324-9951740904e7" in namespace "downward-api-4808" to be "Succeeded or Failed" May 5 01:10:06.314: INFO: Pod "downwardapi-volume-6c8ff810-17ba-4557-9324-9951740904e7": Phase="Pending", Reason="", readiness=false. Elapsed: 20.243523ms May 5 01:10:08.319: INFO: Pod "downwardapi-volume-6c8ff810-17ba-4557-9324-9951740904e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025045594s May 5 01:10:10.354: INFO: Pod "downwardapi-volume-6c8ff810-17ba-4557-9324-9951740904e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060001006s STEP: Saw pod success May 5 01:10:10.354: INFO: Pod "downwardapi-volume-6c8ff810-17ba-4557-9324-9951740904e7" satisfied condition "Succeeded or Failed" May 5 01:10:10.360: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-6c8ff810-17ba-4557-9324-9951740904e7 container client-container: STEP: delete the pod May 5 01:10:10.405: INFO: Waiting for pod downwardapi-volume-6c8ff810-17ba-4557-9324-9951740904e7 to disappear May 5 01:10:10.410: INFO: Pod downwardapi-volume-6c8ff810-17ba-4557-9324-9951740904e7 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:10:10.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4808" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":284,"skipped":4763,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:10:10.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 5 01:10:10.736: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f5fe7f80-e8ad-47ca-b212-d2725f6adaaf" in namespace "downward-api-3012" to be "Succeeded or Failed" May 5 01:10:10.806: INFO: Pod "downwardapi-volume-f5fe7f80-e8ad-47ca-b212-d2725f6adaaf": Phase="Pending", Reason="", readiness=false. Elapsed: 70.02915ms May 5 01:10:12.811: INFO: Pod "downwardapi-volume-f5fe7f80-e8ad-47ca-b212-d2725f6adaaf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074985945s May 5 01:10:14.815: INFO: Pod "downwardapi-volume-f5fe7f80-e8ad-47ca-b212-d2725f6adaaf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.079059981s STEP: Saw pod success May 5 01:10:14.815: INFO: Pod "downwardapi-volume-f5fe7f80-e8ad-47ca-b212-d2725f6adaaf" satisfied condition "Succeeded or Failed" May 5 01:10:14.818: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-f5fe7f80-e8ad-47ca-b212-d2725f6adaaf container client-container: STEP: delete the pod May 5 01:10:14.867: INFO: Waiting for pod downwardapi-volume-f5fe7f80-e8ad-47ca-b212-d2725f6adaaf to disappear May 5 01:10:14.872: INFO: Pod downwardapi-volume-f5fe7f80-e8ad-47ca-b212-d2725f6adaaf no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:10:14.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3012" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":285,"skipped":4775,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:10:14.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the 
deployment to be ready May 5 01:10:15.415: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 01:10:17.431: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724237815, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724237815, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724237815, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724237815, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 01:10:20.465: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 5 01:10:20.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7424-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:10:21.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7508" 
for this suite. STEP: Destroying namespace "webhook-7508-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.846 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":288,"completed":286,"skipped":4775,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:10:21.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 5 01:10:21.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 5 01:10:22.459: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-05T01:10:22Z generation:1 
managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-05T01:10:22Z]] name:name1 resourceVersion:1543681 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:cfc101f6-526a-42d3-af2d-84c4f56216e5] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 5 01:10:32.466: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-05T01:10:32Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-05T01:10:32Z]] name:name2 resourceVersion:1543727 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:841c7de0-915b-4266-981e-08c7e6bfb3aa] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 5 01:10:42.473: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-05T01:10:22Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-05T01:10:42Z]] name:name1 resourceVersion:1543757 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:cfc101f6-526a-42d3-af2d-84c4f56216e5] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 5 01:10:52.494: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-05T01:10:32Z generation:2 
managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-05T01:10:52Z]] name:name2 resourceVersion:1543784 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:841c7de0-915b-4266-981e-08c7e6bfb3aa] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 5 01:11:02.503: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-05T01:10:22Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-05T01:10:42Z]] name:name1 resourceVersion:1543813 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:cfc101f6-526a-42d3-af2d-84c4f56216e5] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 5 01:11:12.511: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-05T01:10:32Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-05T01:10:52Z]] name:name2 resourceVersion:1543843 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:841c7de0-915b-4266-981e-08c7e6bfb3aa] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:11:23.022: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "crd-watch-9822" for this suite. • [SLOW TEST:61.323 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":288,"completed":287,"skipped":4782,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 5 01:11:23.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1559 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 5 01:11:23.102: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-5912' May 5 01:11:23.208: INFO: stderr: "" May 5 01:11:23.208: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 5 01:11:28.258: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-5912 -o json' May 5 01:11:28.365: INFO: stderr: "" May 5 01:11:28.365: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-05T01:11:23Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-05T01:11:23Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n 
\"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.232\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-05T01:11:27Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5912\",\n \"resourceVersion\": \"1543901\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-5912/pods/e2e-test-httpd-pod\",\n \"uid\": \"3f155082-ba3a-4559-b6ca-28d99c522b14\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-htxsz\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-htxsz\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": 
\"default-token-htxsz\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-05T01:11:23Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-05T01:11:27Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-05T01:11:27Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-05T01:11:23Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://460e2f729382bfded6256b7a631a98431837fad1f462eb3b4a3939fac83cb30b\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-05T01:11:26Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.13\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.232\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.232\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-05T01:11:23Z\"\n }\n}\n" STEP: replace the image in the pod May 5 01:11:28.366: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-5912' May 5 01:11:28.747: INFO: stderr: "" May 5 01:11:28.747: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1564 May 5 01:11:28.768: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5912' May 5 01:11:34.845: INFO: stderr: "" May 5 01:11:34.846: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 5 01:11:34.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5912" for this suite. • [SLOW TEST:11.807 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1555 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":288,"completed":288,"skipped":4804,"failed":0} SSMay 5 01:11:34.855: INFO: Running AfterSuite actions on all nodes May 5 01:11:34.855: INFO: Running AfterSuite actions on node 1 May 5 01:11:34.855: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":288,"completed":288,"skipped":4806,"failed":0} Ran 288 of 5094 Specs in 5639.946 seconds SUCCESS! -- 288 Passed | 0 Failed | 0 Pending | 4806 Skipped PASS
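For context on the `kubectl replace -f -` step logged above: the command reads a complete Pod manifest from stdin, which the log does not capture — only the command line and its stdout are recorded. Below is a hypothetical reconstruction of what that manifest could look like, assembled from the fields the log does show (pod name `e2e-test-httpd-pod`, namespace `kubectl-5912`, label `run=e2e-test-httpd-pod`, and the verified replacement image `docker.io/library/busybox:1.29`); every other field is an assumption.

```yaml
# Hypothetical sketch -- the actual manifest piped to stdin is not in the log.
# Values below come from the logged kubectl flags and the verification step;
# anything not shown in the log (e.g. omitted optional fields) is assumed.
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-httpd-pod        # pod name from the log
  namespace: kubectl-5912         # namespace from the --namespace flag
  labels:
    run: e2e-test-httpd-pod       # label from the original `kubectl run`
spec:
  containers:
  - name: e2e-test-httpd-pod
    image: docker.io/library/busybox:1.29   # the image the test verifies
```

Note that `kubectl replace` submits the whole object rather than a patch, and the API server rejects updates to most fields of a running Pod's spec — swapping `spec.containers[*].image`, as this test does, is one of the few changes allowed in place.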